2023-05-27 11:55:27,360 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3 2023-05-27 11:55:27,378 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins 2023-05-27 11:55:27,416 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=264, MaxFileDescriptor=60000, SystemLoadAverage=279, ProcessCount=171, AvailableMemoryMB=5144 2023-05-27 11:55:27,423 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-27 11:55:27,423 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/cluster_1e6655c1-202e-5daa-c051-cc7a1eba33ab, deleteOnExit=true 2023-05-27 11:55:27,423 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-27 11:55:27,424 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/test.cache.data in system properties and HBase conf 2023-05-27 11:55:27,425 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/hadoop.tmp.dir in system properties and HBase conf 2023-05-27 11:55:27,425 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/hadoop.log.dir in system properties and HBase conf 2023-05-27 11:55:27,426 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-27 11:55:27,427 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-27 11:55:27,427 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-27 11:55:27,543 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-05-27 11:55:27,933 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-27 11:55:27,937 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-27 11:55:27,938 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-27 11:55:27,938 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-27 11:55:27,938 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 11:55:27,939 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-27 11:55:27,939 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-27 11:55:27,939 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 11:55:27,940 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 11:55:27,940 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-27 11:55:27,941 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/nfs.dump.dir in system properties and HBase conf 2023-05-27 11:55:27,941 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/java.io.tmpdir in system properties and HBase conf 2023-05-27 11:55:27,941 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 11:55:27,941 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-27 11:55:27,942 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-27 11:55:28,413 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-27 11:55:28,428 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 11:55:28,432 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 11:55:28,696 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-05-27 11:55:28,864 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-05-27 11:55:28,878 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:55:28,911 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:55:28,973 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/java.io.tmpdir/Jetty_localhost_35899_hdfs____l9tso0/webapp 2023-05-27 11:55:29,121 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35899 2023-05-27 11:55:29,129 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
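For orientation, the setup logged above corresponds to a JUnit test bringing up an HBase mini cluster (1 master, 1 region server, 2 data nodes, 1 ZooKeeper server). A minimal sketch of how such a test typically starts the cluster with HBaseTestingUtility and StartMiniClusterOption follows; it mirrors the option values printed in the log but is an illustration, not the actual TestLogRolling source, and the class name used here is hypothetical.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    // Illustrative sketch only: mirrors the StartMiniClusterOption values in the log.
    public class MiniClusterSetupSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      public static void main(String[] args) throws Exception {
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)          // as logged: numMasters=1
            .numRegionServers(1)    // as logged: numRegionServers=1
            .numDataNodes(2)        // as logged: numDataNodes=2
            .numZkServers(1)        // as logged: numZkServers=1
            .build();
        TEST_UTIL.startMiniCluster(option);   // sets hbase.rootdir, starts DFS, ZK, master and RS as in the log above
        try {
          // ... test body would run here ...
        } finally {
          TEST_UTIL.shutdownMiniCluster();
        }
      }
    }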
2023-05-27 11:55:29,132 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 11:55:29,133 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 11:55:29,581 WARN [Listener at localhost/43439] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:55:29,644 WARN [Listener at localhost/43439] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:55:29,662 WARN [Listener at localhost/43439] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:55:29,669 INFO [Listener at localhost/43439] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:55:29,673 INFO [Listener at localhost/43439] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/java.io.tmpdir/Jetty_localhost_33733_datanode____ta64m6/webapp 2023-05-27 11:55:29,776 INFO [Listener at localhost/43439] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33733 2023-05-27 11:55:30,099 WARN [Listener at localhost/33071] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:55:30,110 WARN [Listener at localhost/33071] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:55:30,113 WARN [Listener at localhost/33071] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:55:30,115 INFO [Listener at localhost/33071] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:55:30,119 INFO [Listener at localhost/33071] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/java.io.tmpdir/Jetty_localhost_36189_datanode____.3gswyy/webapp 2023-05-27 11:55:30,213 INFO [Listener at localhost/33071] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36189 2023-05-27 11:55:30,221 WARN [Listener at localhost/38935] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:55:30,536 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7635480d80c8a7e0: Processing first storage report for DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630 from datanode c88aa5a4-3dea-4b8e-b72b-595ef9242720 2023-05-27 11:55:30,537 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7635480d80c8a7e0: from storage DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630 node DatanodeRegistration(127.0.0.1:46147, datanodeUuid=c88aa5a4-3dea-4b8e-b72b-595ef9242720, infoPort=44865, infoSecurePort=0, ipcPort=38935, storageInfo=lv=-57;cid=testClusterID;nsid=659441217;c=1685188528501), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-27 11:55:30,537 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0x75b22ffe5ab4bb5c: Processing first storage report for DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7 from datanode 3c8f5b42-f01e-45db-b4fa-c833a148e662 2023-05-27 11:55:30,537 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x75b22ffe5ab4bb5c: from storage DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7 node DatanodeRegistration(127.0.0.1:34597, datanodeUuid=3c8f5b42-f01e-45db-b4fa-c833a148e662, infoPort=40173, infoSecurePort=0, ipcPort=33071, storageInfo=lv=-57;cid=testClusterID;nsid=659441217;c=1685188528501), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:55:30,538 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7635480d80c8a7e0: Processing first storage report for DS-c9e44e62-b662-457e-8fc1-6db75ebfbe23 from datanode c88aa5a4-3dea-4b8e-b72b-595ef9242720 2023-05-27 11:55:30,538 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7635480d80c8a7e0: from storage DS-c9e44e62-b662-457e-8fc1-6db75ebfbe23 node DatanodeRegistration(127.0.0.1:46147, datanodeUuid=c88aa5a4-3dea-4b8e-b72b-595ef9242720, infoPort=44865, infoSecurePort=0, ipcPort=38935, storageInfo=lv=-57;cid=testClusterID;nsid=659441217;c=1685188528501), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:55:30,538 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x75b22ffe5ab4bb5c: Processing first storage report for DS-e46b7dd7-1c4e-418f-9ded-0a3fca4687a8 from datanode 3c8f5b42-f01e-45db-b4fa-c833a148e662 2023-05-27 11:55:30,538 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x75b22ffe5ab4bb5c: from storage DS-e46b7dd7-1c4e-418f-9ded-0a3fca4687a8 node DatanodeRegistration(127.0.0.1:34597, datanodeUuid=3c8f5b42-f01e-45db-b4fa-c833a148e662, infoPort=40173, infoSecurePort=0, ipcPort=33071, storageInfo=lv=-57;cid=testClusterID;nsid=659441217;c=1685188528501), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:55:30,605 DEBUG [Listener at localhost/38935] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3 2023-05-27 11:55:30,664 INFO [Listener at localhost/38935] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/cluster_1e6655c1-202e-5daa-c051-cc7a1eba33ab/zookeeper_0, clientPort=55837, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/cluster_1e6655c1-202e-5daa-c051-cc7a1eba33ab/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/cluster_1e6655c1-202e-5daa-c051-cc7a1eba33ab/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-27 11:55:30,678 INFO [Listener at localhost/38935] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=55837 2023-05-27 11:55:30,686 INFO [Listener at localhost/38935] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:55:30,687 INFO [Listener at localhost/38935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:55:31,341 INFO [Listener at localhost/38935] util.FSUtils(471): Created version file at hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264 with version=8 2023-05-27 11:55:31,341 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/hbase-staging 2023-05-27 11:55:31,650 INFO [Listener at localhost/38935] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-05-27 11:55:32,103 INFO [Listener at localhost/38935] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 11:55:32,134 INFO [Listener at localhost/38935] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:55:32,134 INFO [Listener at localhost/38935] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 11:55:32,134 INFO [Listener at localhost/38935] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 11:55:32,135 INFO [Listener at localhost/38935] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:55:32,135 INFO [Listener at localhost/38935] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 11:55:32,272 INFO [Listener at localhost/38935] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 11:55:32,353 DEBUG [Listener at localhost/38935] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-05-27 11:55:32,444 INFO [Listener at localhost/38935] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35589 2023-05-27 11:55:32,454 INFO [Listener at localhost/38935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:55:32,456 INFO [Listener at localhost/38935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:55:32,475 INFO [Listener at localhost/38935] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35589 connecting to ZooKeeper ensemble=127.0.0.1:55837 2023-05-27 11:55:32,513 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): 
master:355890x0, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 11:55:32,515 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35589-0x1006c7ed4dd0000 connected 2023-05-27 11:55:32,538 DEBUG [Listener at localhost/38935] zookeeper.ZKUtil(164): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:55:32,539 DEBUG [Listener at localhost/38935] zookeeper.ZKUtil(164): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:55:32,542 DEBUG [Listener at localhost/38935] zookeeper.ZKUtil(164): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 11:55:32,549 DEBUG [Listener at localhost/38935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35589 2023-05-27 11:55:32,550 DEBUG [Listener at localhost/38935] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35589 2023-05-27 11:55:32,550 DEBUG [Listener at localhost/38935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35589 2023-05-27 11:55:32,550 DEBUG [Listener at localhost/38935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35589 2023-05-27 11:55:32,551 DEBUG [Listener at localhost/38935] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35589 2023-05-27 11:55:32,557 INFO [Listener at localhost/38935] master.HMaster(444): hbase.rootdir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264, hbase.cluster.distributed=false 2023-05-27 11:55:32,635 INFO [Listener at localhost/38935] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 11:55:32,636 INFO [Listener at localhost/38935] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:55:32,636 INFO [Listener at localhost/38935] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 11:55:32,636 INFO [Listener at localhost/38935] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 11:55:32,636 INFO [Listener at localhost/38935] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:55:32,636 INFO [Listener at localhost/38935] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 11:55:32,641 INFO [Listener at localhost/38935] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 11:55:32,644 INFO [Listener at localhost/38935] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46469 2023-05-27 
11:55:32,647 INFO [Listener at localhost/38935] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-27 11:55:32,653 DEBUG [Listener at localhost/38935] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-27 11:55:32,655 INFO [Listener at localhost/38935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:55:32,656 INFO [Listener at localhost/38935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:55:32,658 INFO [Listener at localhost/38935] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46469 connecting to ZooKeeper ensemble=127.0.0.1:55837 2023-05-27 11:55:32,663 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): regionserver:464690x0, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 11:55:32,664 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46469-0x1006c7ed4dd0001 connected 2023-05-27 11:55:32,664 DEBUG [Listener at localhost/38935] zookeeper.ZKUtil(164): regionserver:46469-0x1006c7ed4dd0001, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:55:32,666 DEBUG [Listener at localhost/38935] zookeeper.ZKUtil(164): regionserver:46469-0x1006c7ed4dd0001, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:55:32,667 DEBUG [Listener at localhost/38935] zookeeper.ZKUtil(164): regionserver:46469-0x1006c7ed4dd0001, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 11:55:32,667 DEBUG [Listener at localhost/38935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46469 2023-05-27 11:55:32,668 DEBUG [Listener at localhost/38935] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46469 2023-05-27 11:55:32,671 DEBUG [Listener at localhost/38935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46469 2023-05-27 11:55:32,674 DEBUG [Listener at localhost/38935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46469 2023-05-27 11:55:32,674 DEBUG [Listener at localhost/38935] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46469 2023-05-27 11:55:32,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,35589,1685188531494 2023-05-27 11:55:32,685 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 11:55:32,687 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on existing 
znode=/hbase/backup-masters/jenkins-hbase4.apache.org,35589,1685188531494 2023-05-27 11:55:32,714 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 11:55:32,714 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): regionserver:46469-0x1006c7ed4dd0001, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 11:55:32,714 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:55:32,715 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 11:55:32,717 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,35589,1685188531494 from backup master directory 2023-05-27 11:55:32,717 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 11:55:32,720 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,35589,1685188531494 2023-05-27 11:55:32,720 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 11:55:32,721 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
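The znode activity above (an entry appearing under /hbase/backup-masters, /hbase/master being created, then the backup entry being deleted) is the master registering itself as active in ZooKeeper. A minimal sketch of inspecting that state with a plain ZooKeeper client, assuming the ensemble address from this particular run (127.0.0.1:55837) is still reachable; the helper class name is hypothetical and the /hbase/master data is only measured, not decoded, since it is a serialized ServerName.

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    // Hypothetical helper: peeks at the master znodes created during the startup logged above.
    public class MasterZNodeProbe {
      public static void main(String[] args) throws Exception {
        // 55837 is the MiniZooKeeperCluster client port from this run.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:55837", 30000, event -> { });
        byte[] active = zk.getData("/hbase/master", false, null);        // serialized ServerName of the active master
        List<String> backups = zk.getChildren("/hbase/backup-masters", false);
        System.out.println("active master znode bytes: " + active.length);
        System.out.println("backup masters: " + backups);
        zk.close();
      }
    }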
2023-05-27 11:55:32,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,35589,1685188531494 2023-05-27 11:55:32,724 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-05-27 11:55:32,725 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-05-27 11:55:32,814 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/hbase.id with ID: 0ead8a12-0088-4273-80d1-a4cd9bdb146e 2023-05-27 11:55:32,853 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:55:32,867 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:55:32,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3b1e46b2 to 127.0.0.1:55837 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:55:32,940 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5105ade6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:55:32,964 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 11:55:32,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-27 11:55:32,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:55:33,007 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/data/master/store-tmp 2023-05-27 11:55:33,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:55:33,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 11:55:33,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:55:33,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:55:33,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 11:55:33,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:55:33,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:55:33,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:55:33,038 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/WALs/jenkins-hbase4.apache.org,35589,1685188531494 2023-05-27 11:55:33,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35589%2C1685188531494, suffix=, logDir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/WALs/jenkins-hbase4.apache.org,35589,1685188531494, archiveDir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/oldWALs, maxLogs=10 2023-05-27 11:55:33,075 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate() at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.(CommonFSUtils.java:750) at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160) at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160) at 
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62) at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295) at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200) at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:55:33,097 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/WALs/jenkins-hbase4.apache.org,35589,1685188531494/jenkins-hbase4.apache.org%2C35589%2C1685188531494.1685188533073 2023-05-27 11:55:33,097 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK], DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK]] 2023-05-27 11:55:33,098 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:55:33,098 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:55:33,101 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:55:33,102 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:55:33,164 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:55:33,172 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-27 11:55:33,199 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-27 11:55:33,213 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:55:33,218 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:55:33,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:55:33,233 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:55:33,237 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:55:33,239 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=837852, jitterRate=0.06538429856300354}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:55:33,239 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:55:33,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-27 11:55:33,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-27 11:55:33,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-27 11:55:33,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
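The WAL configuration logged a few lines above (blocksize=256 MB, rollsize=128 MB, maxLogs=10) is consistent with the roll size being the WAL block size scaled by the log-roll multiplier. A minimal sketch of that arithmetic, assuming the standard hbase.regionserver.logroll.multiplier key with a 0.5 default; the sizes are copied from the log, not read back from the test's own configuration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Illustration of the relationship reported in the "WAL configuration" line above:
    // rollsize = blocksize * logroll multiplier (256 MB * 0.5 = 128 MB in this run).
    public class WalRollSizeSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        long blocksize = 256L * 1024 * 1024;   // as logged: blocksize=256 MB
        float multiplier = conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        System.out.println("expected rollsize = " + (long) (blocksize * multiplier)); // 128 MB, as logged
      }
    }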
2023-05-27 11:55:33,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-05-27 11:55:33,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 33 msec 2023-05-27 11:55:33,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-27 11:55:33,323 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-27 11:55:33,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-27 11:55:33,357 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-27 11:55:33,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-27 11:55:33,363 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-27 11:55:33,368 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-27 11:55:33,372 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-27 11:55:33,375 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:55:33,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-27 11:55:33,377 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-27 11:55:33,388 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-27 11:55:33,393 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 11:55:33,393 DEBUG [Listener at localhost/38935-EventThread] 
zookeeper.ZKWatcher(600): regionserver:46469-0x1006c7ed4dd0001, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 11:55:33,393 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:55:33,393 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,35589,1685188531494, sessionid=0x1006c7ed4dd0000, setting cluster-up flag (Was=false) 2023-05-27 11:55:33,407 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:55:33,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-27 11:55:33,414 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35589,1685188531494 2023-05-27 11:55:33,419 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:55:33,424 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-27 11:55:33,425 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35589,1685188531494 2023-05-27 11:55:33,428 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/.hbase-snapshot/.tmp 2023-05-27 11:55:33,478 INFO [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(951): ClusterId : 0ead8a12-0088-4273-80d1-a4cd9bdb146e 2023-05-27 11:55:33,482 DEBUG [RS:0;jenkins-hbase4:46469] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-27 11:55:33,488 DEBUG [RS:0;jenkins-hbase4:46469] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-27 11:55:33,488 DEBUG [RS:0;jenkins-hbase4:46469] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-27 11:55:33,493 DEBUG [RS:0;jenkins-hbase4:46469] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-27 11:55:33,494 DEBUG [RS:0;jenkins-hbase4:46469] zookeeper.ReadOnlyZKClient(139): Connect 0x2e6f5d42 to 127.0.0.1:55837 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:55:33,502 DEBUG [RS:0;jenkins-hbase4:46469] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@20a8b6ff, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 
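At this point the single region server (RS:0 on port 46469) is initializing against the same ZooKeeper ensemble. In a test of this shape, the region server instance is usually fetched back from the mini cluster so the test can drive or observe WAL rolling; a minimal sketch follows, assuming a TEST_UTIL handle like the one in the first sketch above, with a hypothetical helper name.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.regionserver.HRegionServer;

    // Hypothetical helper showing how a test typically reaches the lone region server
    // once the mini cluster logged above is up.
    public class RegionServerHandleSketch {
      static void inspect(HBaseTestingUtility testUtil) {
        HRegionServer rs = testUtil.getMiniHBaseCluster().getRegionServer(0); // RS:0 in the log
        System.out.println("region server: " + rs.getServerName());
      }
    }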
2023-05-27 11:55:33,502 DEBUG [RS:0;jenkins-hbase4:46469] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@24973632, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 11:55:33,530 DEBUG [RS:0;jenkins-hbase4:46469] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46469 2023-05-27 11:55:33,535 INFO [RS:0;jenkins-hbase4:46469] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-27 11:55:33,536 INFO [RS:0;jenkins-hbase4:46469] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-27 11:55:33,536 DEBUG [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(1022): About to register with Master. 2023-05-27 11:55:33,539 INFO [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,35589,1685188531494 with isa=jenkins-hbase4.apache.org/172.31.14.131:46469, startcode=1685188532634 2023-05-27 11:55:33,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-27 11:55:33,557 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:55:33,558 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:55:33,558 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:55:33,558 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:55:33,558 DEBUG [RS:0;jenkins-hbase4:46469] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-27 11:55:33,558 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-27 11:55:33,558 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:55:33,559 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 11:55:33,559 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:55:33,565 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685188563564 2023-05-27 11:55:33,566 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-27 11:55:33,568 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 11:55:33,568 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-27 11:55:33,573 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 11:55:33,577 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-27 11:55:33,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-27 11:55:33,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-27 11:55:33,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-27 11:55:33,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-27 11:55:33,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
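The hbase:meta descriptor printed above can be read as the builder calls that would produce it. A minimal sketch of the 'info' family portion using the public descriptor builders, with the values copied from the log line; this is purely illustrative and is not how FSTableDescriptors assembles the descriptor internally.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative reconstruction of part of the hbase:meta descriptor logged above.
    public class MetaDescriptorSketch {
      static TableDescriptor sketch() {
        return TableDescriptorBuilder.newBuilder(TableName.META_TABLE_NAME)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setMaxVersions(3)        // VERSIONS => '3' in the log
                .setInMemory(true)        // IN_MEMORY => 'true'
                .setBlocksize(8192)       // BLOCKSIZE => '8192'
                .build())
            .build();
      }
    }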
2023-05-27 11:55:33,586 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-27 11:55:33,587 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-27 11:55:33,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-27 11:55:33,589 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-27 11:55:33,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-27 11:55:33,594 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188533592,5,FailOnTimeoutGroup] 2023-05-27 11:55:33,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188533595,5,FailOnTimeoutGroup] 2023-05-27 11:55:33,599 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 11:55:33,599 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-27 11:55:33,600 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-27 11:55:33,602 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
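The cleaner chores above (LogsCleaner and HFileCleaner, both on a 600000 ms period) are driven by master-side configuration. A brief sketch of the keys involved; the 600000 ms period matches the usual default of hbase.master.cleaner.interval, and the plugin lists are only assumed to contain the cleaner classes the log shows being initialized.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch of the master settings behind the cleaner chores logged above.
    public class CleanerChoreConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        System.out.println(conf.get("hbase.master.logcleaner.plugins"));    // WAL cleaners, e.g. TimeToLiveLogCleaner
        System.out.println(conf.get("hbase.master.hfilecleaner.plugins"));  // HFile cleaners, e.g. TimeToLiveHFileCleaner
        System.out.println(conf.getInt("hbase.master.cleaner.interval", 600000)); // chore period in ms
      }
    }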
2023-05-27 11:55:33,615 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 11:55:33,616 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 11:55:33,617 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264 2023-05-27 11:55:33,640 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:55:33,644 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 11:55:33,647 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/info 2023-05-27 11:55:33,648 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 11:55:33,649 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:55:33,649 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 11:55:33,652 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:55:33,652 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 11:55:33,653 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:55:33,653 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 11:55:33,655 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/table 2023-05-27 11:55:33,656 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 11:55:33,657 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:55:33,659 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740 2023-05-27 11:55:33,659 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740 2023-05-27 11:55:33,663 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 11:55:33,666 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 11:55:33,669 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:55:33,670 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=728046, jitterRate=-0.07424218952655792}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 11:55:33,670 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 11:55:33,670 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 11:55:33,670 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 11:55:33,670 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 11:55:33,670 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 11:55:33,670 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 11:55:33,671 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 11:55:33,671 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 11:55:33,677 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 11:55:33,677 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-27 11:55:33,686 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-27 11:55:33,697 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-27 11:55:33,698 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35257, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 11:55:33,700 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-27 11:55:33,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35589] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:55:33,725 DEBUG [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264 
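Note: the FlushLargeStoresPolicy message above falls back to memstore-flush-size divided by the number of families because hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:meta descriptor. As a sketch only, with a hypothetical table name and an illustrative value, the property can be supplied on a table descriptor like this:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class PerFamilyFlushBoundSketch {
      public static void main(String[] args) {
        TableDescriptor htd = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("exampleTable"))   // hypothetical table
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
            // Key taken from the log message above; 16 MB is an illustrative lower bound.
            .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                String.valueOf(16L * 1024 * 1024))
            .build();
        System.out.println(htd);
      }
    }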
2023-05-27 11:55:33,726 DEBUG [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43439 2023-05-27 11:55:33,726 DEBUG [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 11:55:33,731 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:55:33,731 DEBUG [RS:0;jenkins-hbase4:46469] zookeeper.ZKUtil(162): regionserver:46469-0x1006c7ed4dd0001, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:55:33,732 WARN [RS:0;jenkins-hbase4:46469] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-27 11:55:33,732 INFO [RS:0;jenkins-hbase4:46469] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:55:33,732 DEBUG [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(1946): logDir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/WALs/jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:55:33,733 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46469,1685188532634] 2023-05-27 11:55:33,741 DEBUG [RS:0;jenkins-hbase4:46469] zookeeper.ZKUtil(162): regionserver:46469-0x1006c7ed4dd0001, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:55:33,751 DEBUG [RS:0;jenkins-hbase4:46469] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 11:55:33,759 INFO [RS:0;jenkins-hbase4:46469] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 11:55:33,778 INFO [RS:0;jenkins-hbase4:46469] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 11:55:33,781 INFO [RS:0;jenkins-hbase4:46469] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 11:55:33,781 INFO [RS:0;jenkins-hbase4:46469] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 11:55:33,782 INFO [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 11:55:33,788 INFO [RS:0;jenkins-hbase4:46469] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
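Note: the region server above instantiates a WALProvider of type FSHLogProvider. The provider is normally selected through configuration; a minimal sketch, assuming the standard hbase.wal.provider key, where "filesystem" selects the FSHLog provider seen in this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "filesystem" -> FSHLogProvider, as logged above (assumed key and value names).
        conf.set("hbase.wal.provider", "filesystem");
        System.out.println(conf.get("hbase.wal.provider"));
      }
    }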
2023-05-27 11:55:33,789 DEBUG [RS:0;jenkins-hbase4:46469] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:55:33,789 DEBUG [RS:0;jenkins-hbase4:46469] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:55:33,789 DEBUG [RS:0;jenkins-hbase4:46469] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:55:33,789 DEBUG [RS:0;jenkins-hbase4:46469] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:55:33,789 DEBUG [RS:0;jenkins-hbase4:46469] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:55:33,789 DEBUG [RS:0;jenkins-hbase4:46469] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 11:55:33,790 DEBUG [RS:0;jenkins-hbase4:46469] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:55:33,790 DEBUG [RS:0;jenkins-hbase4:46469] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:55:33,790 DEBUG [RS:0;jenkins-hbase4:46469] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:55:33,790 DEBUG [RS:0;jenkins-hbase4:46469] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:55:33,791 INFO [RS:0;jenkins-hbase4:46469] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 11:55:33,791 INFO [RS:0;jenkins-hbase4:46469] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 11:55:33,791 INFO [RS:0;jenkins-hbase4:46469] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 11:55:33,805 INFO [RS:0;jenkins-hbase4:46469] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 11:55:33,808 INFO [RS:0;jenkins-hbase4:46469] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46469,1685188532634-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-27 11:55:33,822 INFO [RS:0;jenkins-hbase4:46469] regionserver.Replication(203): jenkins-hbase4.apache.org,46469,1685188532634 started 2023-05-27 11:55:33,822 INFO [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46469,1685188532634, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46469, sessionid=0x1006c7ed4dd0001 2023-05-27 11:55:33,823 DEBUG [RS:0;jenkins-hbase4:46469] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 11:55:33,823 DEBUG [RS:0;jenkins-hbase4:46469] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:55:33,823 DEBUG [RS:0;jenkins-hbase4:46469] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46469,1685188532634' 2023-05-27 11:55:33,823 DEBUG [RS:0;jenkins-hbase4:46469] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:55:33,823 DEBUG [RS:0;jenkins-hbase4:46469] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:55:33,824 DEBUG [RS:0;jenkins-hbase4:46469] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 11:55:33,824 DEBUG [RS:0;jenkins-hbase4:46469] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 11:55:33,824 DEBUG [RS:0;jenkins-hbase4:46469] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:55:33,824 DEBUG [RS:0;jenkins-hbase4:46469] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46469,1685188532634' 2023-05-27 11:55:33,824 DEBUG [RS:0;jenkins-hbase4:46469] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 11:55:33,825 DEBUG [RS:0;jenkins-hbase4:46469] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 11:55:33,825 DEBUG [RS:0;jenkins-hbase4:46469] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 11:55:33,825 INFO [RS:0;jenkins-hbase4:46469] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 11:55:33,825 INFO [RS:0;jenkins-hbase4:46469] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
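Note: with the region server registered, the next records create its FSHLog with blocksize=256 MB, rollsize=128 MB and maxLogs=32. Those figures are typically driven by the keys sketched below; the snippet assumes the usual HBase 2.x names and uses illustrative values, not settings read from this run.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalRollSizingSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // WAL block size (assumed key); the roll size is blocksize * multiplier.
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        // Upper bound on un-archived WAL files before flushes are forced to free them.
        conf.setInt("hbase.regionserver.maxlogs", 32);
        System.out.println(conf.get("hbase.regionserver.maxlogs"));
      }
    }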
2023-05-27 11:55:33,852 DEBUG [jenkins-hbase4:35589] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-27 11:55:33,854 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46469,1685188532634, state=OPENING 2023-05-27 11:55:33,861 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-27 11:55:33,862 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:55:33,863 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 11:55:33,866 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46469,1685188532634}] 2023-05-27 11:55:33,935 INFO [RS:0;jenkins-hbase4:46469] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46469%2C1685188532634, suffix=, logDir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/WALs/jenkins-hbase4.apache.org,46469,1685188532634, archiveDir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/oldWALs, maxLogs=32 2023-05-27 11:55:33,949 INFO [RS:0;jenkins-hbase4:46469] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/WALs/jenkins-hbase4.apache.org,46469,1685188532634/jenkins-hbase4.apache.org%2C46469%2C1685188532634.1685188533938 2023-05-27 11:55:33,949 DEBUG [RS:0;jenkins-hbase4:46469] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK], DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK]] 2023-05-27 11:55:34,047 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:55:34,050 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 11:55:34,053 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50236, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 11:55:34,066 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-27 11:55:34,067 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:55:34,071 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46469%2C1685188532634.meta, suffix=.meta, logDir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/WALs/jenkins-hbase4.apache.org,46469,1685188532634, archiveDir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/oldWALs, maxLogs=32 2023-05-27 11:55:34,084 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/WALs/jenkins-hbase4.apache.org,46469,1685188532634/jenkins-hbase4.apache.org%2C46469%2C1685188532634.meta.1685188534072.meta 2023-05-27 11:55:34,084 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK], DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK]] 2023-05-27 11:55:34,084 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:55:34,086 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-27 11:55:34,101 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-27 11:55:34,106 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-27 11:55:34,111 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-27 11:55:34,111 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:55:34,111 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-27 11:55:34,111 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-27 11:55:34,114 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 11:55:34,115 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/info 2023-05-27 11:55:34,115 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/info 2023-05-27 11:55:34,116 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 11:55:34,117 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:55:34,117 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 11:55:34,118 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:55:34,118 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:55:34,119 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 11:55:34,120 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:55:34,120 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 11:55:34,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/table 2023-05-27 11:55:34,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/table 2023-05-27 11:55:34,122 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 11:55:34,122 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:55:34,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740 2023-05-27 11:55:34,126 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740 2023-05-27 11:55:34,130 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 11:55:34,132 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 11:55:34,134 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=716606, jitterRate=-0.08878882229328156}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 11:55:34,134 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 11:55:34,144 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685188534040 2023-05-27 11:55:34,160 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-27 11:55:34,161 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-27 11:55:34,161 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46469,1685188532634, state=OPEN 2023-05-27 11:55:34,164 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-27 11:55:34,164 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 11:55:34,168 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-27 11:55:34,168 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46469,1685188532634 in 298 msec 2023-05-27 11:55:34,174 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-27 11:55:34,174 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 484 msec 2023-05-27 11:55:34,180 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 710 msec 2023-05-27 11:55:34,180 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685188534180, completionTime=-1 2023-05-27 11:55:34,180 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-27 11:55:34,180 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-27 11:55:34,239 DEBUG [hconnection-0x61e07ad2-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 11:55:34,243 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50252, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 11:55:34,260 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-27 11:55:34,260 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685188594260 2023-05-27 11:55:34,260 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685188654260 2023-05-27 11:55:34,260 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 79 msec 2023-05-27 11:55:34,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35589,1685188531494-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 11:55:34,287 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35589,1685188531494-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:55:34,287 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35589,1685188531494-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:55:34,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:35589, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:55:34,289 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-27 11:55:34,294 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-27 11:55:34,304 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-27 11:55:34,305 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 11:55:34,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-27 11:55:34,318 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 11:55:34,320 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 11:55:34,340 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/.tmp/data/hbase/namespace/fd727635f7f598af8a2e081def5026e6 2023-05-27 11:55:34,343 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/.tmp/data/hbase/namespace/fd727635f7f598af8a2e081def5026e6 empty. 2023-05-27 11:55:34,343 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/.tmp/data/hbase/namespace/fd727635f7f598af8a2e081def5026e6 2023-05-27 11:55:34,344 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-27 11:55:34,400 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-27 11:55:34,402 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => fd727635f7f598af8a2e081def5026e6, NAME => 'hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/.tmp 2023-05-27 11:55:34,418 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:55:34,418 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing fd727635f7f598af8a2e081def5026e6, disabling compactions & flushes 2023-05-27 11:55:34,418 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. 
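Note: the create request above carries a single 'info' family with BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10' and BLOCKSIZE => '8192'. For reference, a client would express an equivalent schema with the 2.x builder API; this is a hedged sketch with a hypothetical table name, not the master's internal code path.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeSchemaSketch {
      public static void main(String[] args) {
        TableDescriptor htd = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("exampleNamespaceLike"))  // hypothetical name
            .setColumnFamily(ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)
                .setInMemory(true)
                .setMaxVersions(10)
                .setBlocksize(8192)
                .build())
            .build();
        System.out.println(htd);
      }
    }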
2023-05-27 11:55:34,418 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. 2023-05-27 11:55:34,418 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. after waiting 0 ms 2023-05-27 11:55:34,418 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. 2023-05-27 11:55:34,418 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. 2023-05-27 11:55:34,418 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for fd727635f7f598af8a2e081def5026e6: 2023-05-27 11:55:34,422 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 11:55:34,438 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188534425"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188534425"}]},"ts":"1685188534425"} 2023-05-27 11:55:34,464 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 11:55:34,466 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 11:55:34,470 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188534466"}]},"ts":"1685188534466"} 2023-05-27 11:55:34,475 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-27 11:55:34,484 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fd727635f7f598af8a2e081def5026e6, ASSIGN}] 2023-05-27 11:55:34,487 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fd727635f7f598af8a2e081def5026e6, ASSIGN 2023-05-27 11:55:34,489 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=fd727635f7f598af8a2e081def5026e6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46469,1685188532634; forceNewPlan=false, retain=false 2023-05-27 11:55:34,640 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=fd727635f7f598af8a2e081def5026e6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:55:34,640 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188534639"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188534639"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188534639"}]},"ts":"1685188534639"} 2023-05-27 11:55:34,644 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure fd727635f7f598af8a2e081def5026e6, server=jenkins-hbase4.apache.org,46469,1685188532634}] 2023-05-27 11:55:34,804 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. 2023-05-27 11:55:34,804 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fd727635f7f598af8a2e081def5026e6, NAME => 'hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:55:34,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace fd727635f7f598af8a2e081def5026e6 2023-05-27 11:55:34,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:55:34,806 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fd727635f7f598af8a2e081def5026e6 2023-05-27 11:55:34,806 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fd727635f7f598af8a2e081def5026e6 2023-05-27 11:55:34,808 INFO [StoreOpener-fd727635f7f598af8a2e081def5026e6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fd727635f7f598af8a2e081def5026e6 2023-05-27 11:55:34,812 DEBUG [StoreOpener-fd727635f7f598af8a2e081def5026e6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/namespace/fd727635f7f598af8a2e081def5026e6/info 2023-05-27 11:55:34,813 DEBUG [StoreOpener-fd727635f7f598af8a2e081def5026e6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/namespace/fd727635f7f598af8a2e081def5026e6/info 2023-05-27 11:55:34,813 INFO [StoreOpener-fd727635f7f598af8a2e081def5026e6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fd727635f7f598af8a2e081def5026e6 columnFamilyName info 2023-05-27 11:55:34,814 INFO [StoreOpener-fd727635f7f598af8a2e081def5026e6-1] regionserver.HStore(310): Store=fd727635f7f598af8a2e081def5026e6/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:55:34,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/namespace/fd727635f7f598af8a2e081def5026e6 2023-05-27 11:55:34,816 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/namespace/fd727635f7f598af8a2e081def5026e6 2023-05-27 11:55:34,821 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fd727635f7f598af8a2e081def5026e6 2023-05-27 11:55:34,825 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/namespace/fd727635f7f598af8a2e081def5026e6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:55:34,825 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fd727635f7f598af8a2e081def5026e6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=722686, jitterRate=-0.08105787634849548}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:55:34,826 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fd727635f7f598af8a2e081def5026e6: 2023-05-27 11:55:34,828 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6., pid=6, masterSystemTime=1685188534797 2023-05-27 11:55:34,832 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. 2023-05-27 11:55:34,832 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. 
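Note: the RegionStateStore entries before and after this point are ordinary Puts written into hbase:meta (regioninfo/sn/state qualifiers). A client-side Put against a user table uses the same API; the following is a minimal sketch with a hypothetical table, row and values, not code taken from this test.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PutSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("exampleTable"))) { // hypothetical
          Put put = new Put(Bytes.toBytes("row-1"));
          put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("value"));
          table.put(put);
        }
      }
    }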
2023-05-27 11:55:34,833 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=fd727635f7f598af8a2e081def5026e6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:55:34,834 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188534832"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685188534832"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685188534832"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188534832"}]},"ts":"1685188534832"} 2023-05-27 11:55:34,841 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-27 11:55:34,842 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure fd727635f7f598af8a2e081def5026e6, server=jenkins-hbase4.apache.org,46469,1685188532634 in 193 msec 2023-05-27 11:55:34,846 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-27 11:55:34,846 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=fd727635f7f598af8a2e081def5026e6, ASSIGN in 358 msec 2023-05-27 11:55:34,847 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 11:55:34,848 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188534848"}]},"ts":"1685188534848"} 2023-05-27 11:55:34,851 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-27 11:55:34,855 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 11:55:34,857 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 549 msec 2023-05-27 11:55:34,918 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-27 11:55:34,919 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:55:34,920 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:55:34,961 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-27 11:55:34,979 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): 
master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:55:34,985 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 34 msec 2023-05-27 11:55:34,996 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-27 11:55:35,008 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:55:35,014 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 18 msec 2023-05-27 11:55:35,023 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-27 11:55:35,026 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-27 11:55:35,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.307sec 2023-05-27 11:55:35,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-27 11:55:35,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-27 11:55:35,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-27 11:55:35,033 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35589,1685188531494-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-27 11:55:35,034 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35589,1685188531494-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
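Note: the two CreateNamespaceProcedure runs above set up the 'default' and 'hbase' namespaces during master startup. The equivalent client-side call for a user namespace is Admin.createNamespace; a hedged sketch with a hypothetical namespace name:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createNamespace(NamespaceDescriptor.create("example_ns").build()); // hypothetical namespace
        }
      }
    }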
2023-05-27 11:55:35,044 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-27 11:55:35,084 DEBUG [Listener at localhost/38935] zookeeper.ReadOnlyZKClient(139): Connect 0x2741d125 to 127.0.0.1:55837 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:55:35,090 DEBUG [Listener at localhost/38935] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@553f00ac, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:55:35,102 DEBUG [hconnection-0x243f804e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 11:55:35,113 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50262, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 11:55:35,126 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,35589,1685188531494 2023-05-27 11:55:35,126 INFO [Listener at localhost/38935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:55:35,133 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-27 11:55:35,133 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:55:35,134 INFO [Listener at localhost/38935] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-27 11:55:35,145 DEBUG [Listener at localhost/38935] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-27 11:55:35,149 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40816, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-27 11:55:35,158 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35589] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-27 11:55:35,158 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35589] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
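Note: the "set balanceSwitch=false" line above is the master honoring a client request to turn the balancer off, and the two TableDescriptorChecker warnings flag the deliberately tiny MAX_FILESIZE and MEMSTORE_FLUSHSIZE on the incoming create request. The balancer toggle corresponds to Admin.balancerSwitch in the 2.x client; a minimal sketch:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class BalancerSwitchSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Turn the balancer off synchronously; the return value is the previous state.
          boolean wasOn = admin.balancerSwitch(false, true);
          System.out.println("balancer previously on: " + wasOn);
        }
      }
    }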
2023-05-27 11:55:35,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35589] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 11:55:35,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35589] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-05-27 11:55:35,166 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 11:55:35,168 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 11:55:35,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35589] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-05-27 11:55:35,172 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:55:35,173 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583 empty. 
2023-05-27 11:55:35,175 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:55:35,175 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-05-27 11:55:35,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35589] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 11:55:35,197 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-27 11:55:35,199 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1df7a0cfe0bdd6153937d058c7f86583, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/.tmp 2023-05-27 11:55:35,212 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:55:35,213 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing 1df7a0cfe0bdd6153937d058c7f86583, disabling compactions & flushes 2023-05-27 11:55:35,213 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 2023-05-27 11:55:35,213 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 2023-05-27 11:55:35,213 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. after waiting 0 ms 2023-05-27 11:55:35,213 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 2023-05-27 11:55:35,213 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 
2023-05-27 11:55:35,213 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 1df7a0cfe0bdd6153937d058c7f86583: 2023-05-27 11:55:35,216 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 11:55:35,218 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685188535218"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188535218"}]},"ts":"1685188535218"} 2023-05-27 11:55:35,221 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 11:55:35,223 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 11:55:35,223 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188535223"}]},"ts":"1685188535223"} 2023-05-27 11:55:35,225 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-05-27 11:55:35,230 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=1df7a0cfe0bdd6153937d058c7f86583, ASSIGN}] 2023-05-27 11:55:35,232 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=1df7a0cfe0bdd6153937d058c7f86583, ASSIGN 2023-05-27 11:55:35,234 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=1df7a0cfe0bdd6153937d058c7f86583, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46469,1685188532634; forceNewPlan=false, retain=false 2023-05-27 11:55:35,385 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=1df7a0cfe0bdd6153937d058c7f86583, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:55:35,385 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685188535385"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188535385"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188535385"}]},"ts":"1685188535385"} 2023-05-27 11:55:35,388 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 1df7a0cfe0bdd6153937d058c7f86583, server=jenkins-hbase4.apache.org,46469,1685188532634}] 2023-05-27 11:55:35,548 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 2023-05-27 11:55:35,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1df7a0cfe0bdd6153937d058c7f86583, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:55:35,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling 1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:55:35,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:55:35,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:55:35,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:55:35,551 INFO [StoreOpener-1df7a0cfe0bdd6153937d058c7f86583-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:55:35,554 DEBUG [StoreOpener-1df7a0cfe0bdd6153937d058c7f86583-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info 2023-05-27 11:55:35,554 DEBUG [StoreOpener-1df7a0cfe0bdd6153937d058c7f86583-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info 2023-05-27 11:55:35,554 INFO [StoreOpener-1df7a0cfe0bdd6153937d058c7f86583-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1df7a0cfe0bdd6153937d058c7f86583 columnFamilyName info 2023-05-27 11:55:35,555 INFO [StoreOpener-1df7a0cfe0bdd6153937d058c7f86583-1] regionserver.HStore(310): Store=1df7a0cfe0bdd6153937d058c7f86583/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:55:35,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:55:35,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:55:35,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:55:35,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:55:35,567 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1df7a0cfe0bdd6153937d058c7f86583; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=882695, jitterRate=0.1224050521850586}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:55:35,567 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1df7a0cfe0bdd6153937d058c7f86583: 2023-05-27 11:55:35,568 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583., pid=11, masterSystemTime=1685188535542 2023-05-27 11:55:35,571 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 2023-05-27 11:55:35,571 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 
2023-05-27 11:55:35,572 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=1df7a0cfe0bdd6153937d058c7f86583, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:55:35,573 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685188535572"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685188535572"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685188535572"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188535572"}]},"ts":"1685188535572"} 2023-05-27 11:55:35,580 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-27 11:55:35,580 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 1df7a0cfe0bdd6153937d058c7f86583, server=jenkins-hbase4.apache.org,46469,1685188532634 in 188 msec 2023-05-27 11:55:35,584 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-27 11:55:35,584 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=1df7a0cfe0bdd6153937d058c7f86583, ASSIGN in 350 msec 2023-05-27 11:55:35,586 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 11:55:35,586 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188535586"}]},"ts":"1685188535586"} 2023-05-27 11:55:35,588 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-05-27 11:55:35,592 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 11:55:35,594 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 430 msec 2023-05-27 11:55:39,621 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-05-27 11:55:39,756 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-27 11:55:39,757 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-27 11:55:39,759 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-05-27 11:55:41,646 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-27 11:55:41,647 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-05-27 11:55:45,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35589] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 11:55:45,189 INFO [Listener at localhost/38935] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-05-27 11:55:45,193 DEBUG [Listener at localhost/38935] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-05-27 11:55:45,194 DEBUG [Listener at localhost/38935] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 2023-05-27 11:55:57,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46469] regionserver.HRegion(9158): Flush requested on 1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:55:57,220 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1df7a0cfe0bdd6153937d058c7f86583 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 11:55:57,287 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/.tmp/info/b28dc9bd3ee9414da28fc295dab9d63a 2023-05-27 11:55:57,328 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/.tmp/info/b28dc9bd3ee9414da28fc295dab9d63a as hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/b28dc9bd3ee9414da28fc295dab9d63a 2023-05-27 11:55:57,338 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/b28dc9bd3ee9414da28fc295dab9d63a, entries=7, sequenceid=11, filesize=12.1 K 2023-05-27 11:55:57,341 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 1df7a0cfe0bdd6153937d058c7f86583 in 120ms, sequenceid=11, compaction requested=false 2023-05-27 11:55:57,342 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1df7a0cfe0bdd6153937d058c7f86583: 2023-05-27 11:56:05,431 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK], DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK]] 2023-05-27 11:56:07,634 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK], DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK]] 2023-05-27 11:56:09,837 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK], DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK]] 2023-05-27 11:56:12,040 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK], DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK]] 2023-05-27 11:56:12,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46469] regionserver.HRegion(9158): Flush requested on 1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:56:12,040 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1df7a0cfe0bdd6153937d058c7f86583 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 11:56:12,242 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK], DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK]] 2023-05-27 11:56:12,260 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/.tmp/info/2067f954a4fb4621948dfd1d42488645 2023-05-27 11:56:12,269 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/.tmp/info/2067f954a4fb4621948dfd1d42488645 as hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/2067f954a4fb4621948dfd1d42488645 2023-05-27 11:56:12,278 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/2067f954a4fb4621948dfd1d42488645, entries=7, sequenceid=21, filesize=12.1 K 2023-05-27 11:56:12,479 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK], DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK]] 2023-05-27 11:56:12,480 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 1df7a0cfe0bdd6153937d058c7f86583 in 439ms, sequenceid=21, compaction requested=false 2023-05-27 11:56:12,480 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1df7a0cfe0bdd6153937d058c7f86583: 2023-05-27 11:56:12,480 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-05-27 11:56:12,480 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 11:56:12,481 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/b28dc9bd3ee9414da28fc295dab9d63a 
because midkey is the same as first or last row 2023-05-27 11:56:14,243 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK], DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK]] 2023-05-27 11:56:16,445 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK], DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK]] 2023-05-27 11:56:16,447 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46469%2C1685188532634:(num 1685188533938) roll requested 2023-05-27 11:56:16,447 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK], DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK]] 2023-05-27 11:56:16,659 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK], DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK]] 2023-05-27 11:56:16,660 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/WALs/jenkins-hbase4.apache.org,46469,1685188532634/jenkins-hbase4.apache.org%2C46469%2C1685188532634.1685188533938 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/WALs/jenkins-hbase4.apache.org,46469,1685188532634/jenkins-hbase4.apache.org%2C46469%2C1685188532634.1685188576447 2023-05-27 11:56:16,660 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK], DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK]] 2023-05-27 11:56:16,661 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/WALs/jenkins-hbase4.apache.org,46469,1685188532634/jenkins-hbase4.apache.org%2C46469%2C1685188532634.1685188533938 is not closed yet, will try archiving it next time 2023-05-27 11:56:26,459 INFO [Listener at localhost/38935] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-27 11:56:31,461 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK], DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK]] 2023-05-27 11:56:31,462 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK], DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK]] 2023-05-27 11:56:31,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46469] regionserver.HRegion(9158): Flush requested on 1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:56:31,462 DEBUG 
[regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46469%2C1685188532634:(num 1685188576447) roll requested 2023-05-27 11:56:31,462 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1df7a0cfe0bdd6153937d058c7f86583 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 11:56:33,463 INFO [Listener at localhost/38935] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-27 11:56:36,463 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK], DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK]] 2023-05-27 11:56:36,464 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK], DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK]] 2023-05-27 11:56:36,480 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/.tmp/info/762a7a945a5d42939200a66e347b5c14 2023-05-27 11:56:36,481 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK], DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK]] 2023-05-27 11:56:36,481 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK], DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK]] 2023-05-27 11:56:36,482 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/WALs/jenkins-hbase4.apache.org,46469,1685188532634/jenkins-hbase4.apache.org%2C46469%2C1685188532634.1685188576447 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/WALs/jenkins-hbase4.apache.org,46469,1685188532634/jenkins-hbase4.apache.org%2C46469%2C1685188532634.1685188591462 2023-05-27 11:56:36,482 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34597,DS-d855bfa6-11fd-4f5e-b937-a1761ce032f7,DISK], DatanodeInfoWithStorage[127.0.0.1:46147,DS-9a34859d-d2e7-48ef-81e4-2cb9c3692630,DISK]] 2023-05-27 11:56:36,482 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/WALs/jenkins-hbase4.apache.org,46469,1685188532634/jenkins-hbase4.apache.org%2C46469%2C1685188532634.1685188576447 is not closed yet, will try archiving it next time 2023-05-27 11:56:36,492 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/.tmp/info/762a7a945a5d42939200a66e347b5c14 as hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/762a7a945a5d42939200a66e347b5c14 2023-05-27 11:56:36,501 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/762a7a945a5d42939200a66e347b5c14, entries=7, sequenceid=31, filesize=12.1 K 2023-05-27 11:56:36,504 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 1df7a0cfe0bdd6153937d058c7f86583 in 5042ms, sequenceid=31, compaction requested=true 2023-05-27 11:56:36,504 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1df7a0cfe0bdd6153937d058c7f86583: 2023-05-27 11:56:36,505 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-05-27 11:56:36,505 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 11:56:36,505 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/b28dc9bd3ee9414da28fc295dab9d63a because midkey is the same as first or last row 2023-05-27 11:56:36,507 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 11:56:36,508 DEBUG [RS:0;jenkins-hbase4:46469-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 11:56:36,512 DEBUG [RS:0;jenkins-hbase4:46469-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 11:56:36,514 DEBUG [RS:0;jenkins-hbase4:46469-shortCompactions-0] regionserver.HStore(1912): 1df7a0cfe0bdd6153937d058c7f86583/info is initiating minor compaction (all files) 2023-05-27 11:56:36,514 INFO [RS:0;jenkins-hbase4:46469-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 1df7a0cfe0bdd6153937d058c7f86583/info in TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 
2023-05-27 11:56:36,515 INFO [RS:0;jenkins-hbase4:46469-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/b28dc9bd3ee9414da28fc295dab9d63a, hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/2067f954a4fb4621948dfd1d42488645, hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/762a7a945a5d42939200a66e347b5c14] into tmpdir=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/.tmp, totalSize=36.3 K 2023-05-27 11:56:36,516 DEBUG [RS:0;jenkins-hbase4:46469-shortCompactions-0] compactions.Compactor(207): Compacting b28dc9bd3ee9414da28fc295dab9d63a, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685188545198 2023-05-27 11:56:36,517 DEBUG [RS:0;jenkins-hbase4:46469-shortCompactions-0] compactions.Compactor(207): Compacting 2067f954a4fb4621948dfd1d42488645, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1685188559221 2023-05-27 11:56:36,517 DEBUG [RS:0;jenkins-hbase4:46469-shortCompactions-0] compactions.Compactor(207): Compacting 762a7a945a5d42939200a66e347b5c14, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1685188574042 2023-05-27 11:56:36,541 INFO [RS:0;jenkins-hbase4:46469-shortCompactions-0] throttle.PressureAwareThroughputController(145): 1df7a0cfe0bdd6153937d058c7f86583#info#compaction#3 average throughput is 10.77 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 11:56:36,560 DEBUG [RS:0;jenkins-hbase4:46469-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/.tmp/info/beb7bc5b4cc945dab9346b45e9919772 as hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/beb7bc5b4cc945dab9346b45e9919772 2023-05-27 11:56:36,576 INFO [RS:0;jenkins-hbase4:46469-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1df7a0cfe0bdd6153937d058c7f86583/info of 1df7a0cfe0bdd6153937d058c7f86583 into beb7bc5b4cc945dab9346b45e9919772(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 11:56:36,576 DEBUG [RS:0;jenkins-hbase4:46469-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1df7a0cfe0bdd6153937d058c7f86583: 2023-05-27 11:56:36,576 INFO [RS:0;jenkins-hbase4:46469-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583., storeName=1df7a0cfe0bdd6153937d058c7f86583/info, priority=13, startTime=1685188596507; duration=0sec 2023-05-27 11:56:36,577 DEBUG [RS:0;jenkins-hbase4:46469-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-05-27 11:56:36,577 DEBUG [RS:0;jenkins-hbase4:46469-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 11:56:36,577 DEBUG [RS:0;jenkins-hbase4:46469-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/beb7bc5b4cc945dab9346b45e9919772 because midkey is the same as first or last row 2023-05-27 11:56:36,578 DEBUG [RS:0;jenkins-hbase4:46469-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 11:56:48,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46469] regionserver.HRegion(9158): Flush requested on 1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:56:48,584 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1df7a0cfe0bdd6153937d058c7f86583 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 11:56:48,600 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/.tmp/info/6b09978585de4fb4aa19c0162617408e 2023-05-27 11:56:48,608 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/.tmp/info/6b09978585de4fb4aa19c0162617408e as hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/6b09978585de4fb4aa19c0162617408e 2023-05-27 11:56:48,615 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/6b09978585de4fb4aa19c0162617408e, entries=7, sequenceid=42, filesize=12.1 K 2023-05-27 11:56:48,616 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 1df7a0cfe0bdd6153937d058c7f86583 in 32ms, sequenceid=42, compaction requested=false 2023-05-27 11:56:48,616 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1df7a0cfe0bdd6153937d058c7f86583: 2023-05-27 11:56:48,616 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-05-27 
11:56:48,616 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 11:56:48,616 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/beb7bc5b4cc945dab9346b45e9919772 because midkey is the same as first or last row 2023-05-27 11:56:56,592 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-27 11:56:56,593 INFO [Listener at localhost/38935] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-27 11:56:56,593 DEBUG [Listener at localhost/38935] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2741d125 to 127.0.0.1:55837 2023-05-27 11:56:56,593 DEBUG [Listener at localhost/38935] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:56:56,594 DEBUG [Listener at localhost/38935] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-27 11:56:56,594 DEBUG [Listener at localhost/38935] util.JVMClusterUtil(257): Found active master hash=612685945, stopped=false 2023-05-27 11:56:56,594 INFO [Listener at localhost/38935] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,35589,1685188531494 2023-05-27 11:56:56,596 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 11:56:56,596 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:56:56,596 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): regionserver:46469-0x1006c7ed4dd0001, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 11:56:56,597 INFO [Listener at localhost/38935] procedure2.ProcedureExecutor(629): Stopping 2023-05-27 11:56:56,597 DEBUG [Listener at localhost/38935] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3b1e46b2 to 127.0.0.1:55837 2023-05-27 11:56:56,597 DEBUG [Listener at localhost/38935] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:56:56,598 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:56:56,598 INFO [Listener at localhost/38935] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,46469,1685188532634' ***** 2023-05-27 11:56:56,598 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46469-0x1006c7ed4dd0001, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:56:56,598 INFO [Listener at localhost/38935] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 11:56:56,598 INFO [RS:0;jenkins-hbase4:46469] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 11:56:56,599 INFO [RS:0;jenkins-hbase4:46469] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-05-27 11:56:56,599 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 11:56:56,599 INFO [RS:0;jenkins-hbase4:46469] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-27 11:56:56,599 INFO [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(3303): Received CLOSE for fd727635f7f598af8a2e081def5026e6 2023-05-27 11:56:56,600 INFO [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(3303): Received CLOSE for 1df7a0cfe0bdd6153937d058c7f86583 2023-05-27 11:56:56,600 INFO [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:56:56,600 DEBUG [RS:0;jenkins-hbase4:46469] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2e6f5d42 to 127.0.0.1:55837 2023-05-27 11:56:56,600 DEBUG [RS:0;jenkins-hbase4:46469] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:56:56,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fd727635f7f598af8a2e081def5026e6, disabling compactions & flushes 2023-05-27 11:56:56,601 INFO [RS:0;jenkins-hbase4:46469] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-27 11:56:56,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. 2023-05-27 11:56:56,601 INFO [RS:0;jenkins-hbase4:46469] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-27 11:56:56,601 INFO [RS:0;jenkins-hbase4:46469] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-27 11:56:56,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. 2023-05-27 11:56:56,601 INFO [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 11:56:56,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. after waiting 0 ms 2023-05-27 11:56:56,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. 
2023-05-27 11:56:56,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing fd727635f7f598af8a2e081def5026e6 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-27 11:56:56,601 INFO [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-27 11:56:56,602 DEBUG [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(1478): Online Regions={fd727635f7f598af8a2e081def5026e6=hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6., 1df7a0cfe0bdd6153937d058c7f86583=TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583., 1588230740=hbase:meta,,1.1588230740} 2023-05-27 11:56:56,602 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 11:56:56,602 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 11:56:56,602 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 11:56:56,602 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 11:56:56,602 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 11:56:56,602 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-05-27 11:56:56,603 DEBUG [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(1504): Waiting on 1588230740, 1df7a0cfe0bdd6153937d058c7f86583, fd727635f7f598af8a2e081def5026e6 2023-05-27 11:56:56,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/namespace/fd727635f7f598af8a2e081def5026e6/.tmp/info/a5d809f1d14f451f8d221f636e9f107a 2023-05-27 11:56:56,633 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/.tmp/info/17cd57a3227b465a845f63690e5e3814 2023-05-27 11:56:56,642 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/namespace/fd727635f7f598af8a2e081def5026e6/.tmp/info/a5d809f1d14f451f8d221f636e9f107a as hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/namespace/fd727635f7f598af8a2e081def5026e6/info/a5d809f1d14f451f8d221f636e9f107a 2023-05-27 11:56:56,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/namespace/fd727635f7f598af8a2e081def5026e6/info/a5d809f1d14f451f8d221f636e9f107a, entries=2, sequenceid=6, filesize=4.8 K 2023-05-27 11:56:56,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, 
currentSize=0 B/0 for fd727635f7f598af8a2e081def5026e6 in 54ms, sequenceid=6, compaction requested=false 2023-05-27 11:56:56,658 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/.tmp/table/4063bbdff64a4fc69762ee9f7340c60c 2023-05-27 11:56:56,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/namespace/fd727635f7f598af8a2e081def5026e6/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-27 11:56:56,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. 2023-05-27 11:56:56,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fd727635f7f598af8a2e081def5026e6: 2023-05-27 11:56:56,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685188534304.fd727635f7f598af8a2e081def5026e6. 2023-05-27 11:56:56,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1df7a0cfe0bdd6153937d058c7f86583, disabling compactions & flushes 2023-05-27 11:56:56,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 2023-05-27 11:56:56,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 2023-05-27 11:56:56,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. after waiting 0 ms 2023-05-27 11:56:56,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 
2023-05-27 11:56:56,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1df7a0cfe0bdd6153937d058c7f86583 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-05-27 11:56:56,667 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/.tmp/info/17cd57a3227b465a845f63690e5e3814 as hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/info/17cd57a3227b465a845f63690e5e3814 2023-05-27 11:56:56,680 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/info/17cd57a3227b465a845f63690e5e3814, entries=20, sequenceid=14, filesize=7.4 K 2023-05-27 11:56:56,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/.tmp/info/5d8f597ca0a34c0598e3b5ec08076e91 2023-05-27 11:56:56,681 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/.tmp/table/4063bbdff64a4fc69762ee9f7340c60c as hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/table/4063bbdff64a4fc69762ee9f7340c60c 2023-05-27 11:56:56,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/.tmp/info/5d8f597ca0a34c0598e3b5ec08076e91 as hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/5d8f597ca0a34c0598e3b5ec08076e91 2023-05-27 11:56:56,688 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/table/4063bbdff64a4fc69762ee9f7340c60c, entries=4, sequenceid=14, filesize=4.8 K 2023-05-27 11:56:56,690 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2934, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 88ms, sequenceid=14, compaction requested=false 2023-05-27 11:56:56,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/5d8f597ca0a34c0598e3b5ec08076e91, entries=3, sequenceid=48, filesize=7.9 K 2023-05-27 11:56:56,700 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-27 11:56:56,700 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for 1df7a0cfe0bdd6153937d058c7f86583 in 35ms, sequenceid=48, compaction requested=true 2023-05-27 11:56:56,703 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/b28dc9bd3ee9414da28fc295dab9d63a, hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/2067f954a4fb4621948dfd1d42488645, hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/762a7a945a5d42939200a66e347b5c14] to archive 2023-05-27 11:56:56,704 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-27 11:56:56,706 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 11:56:56,706 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 11:56:56,706 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-27 11:56:56,709 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-27 11:56:56,718 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/b28dc9bd3ee9414da28fc295dab9d63a to hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/archive/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/b28dc9bd3ee9414da28fc295dab9d63a 2023-05-27 11:56:56,721 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/2067f954a4fb4621948dfd1d42488645 to hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/archive/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/2067f954a4fb4621948dfd1d42488645 2023-05-27 11:56:56,723 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/762a7a945a5d42939200a66e347b5c14 to hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/archive/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/info/762a7a945a5d42939200a66e347b5c14 2023-05-27 11:56:56,768 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/data/default/TestLogRolling-testSlowSyncLogRolling/1df7a0cfe0bdd6153937d058c7f86583/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-05-27 11:56:56,771 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 2023-05-27 11:56:56,771 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1df7a0cfe0bdd6153937d058c7f86583: 2023-05-27 11:56:56,771 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1685188535158.1df7a0cfe0bdd6153937d058c7f86583. 2023-05-27 11:56:56,804 INFO [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46469,1685188532634; all regions closed. 
2023-05-27 11:56:56,805 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/WALs/jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:56:56,813 DEBUG [RS:0;jenkins-hbase4:46469] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/oldWALs 2023-05-27 11:56:56,814 INFO [RS:0;jenkins-hbase4:46469] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C46469%2C1685188532634.meta:.meta(num 1685188534072) 2023-05-27 11:56:56,814 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/WALs/jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:56:56,824 DEBUG [RS:0;jenkins-hbase4:46469] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/oldWALs 2023-05-27 11:56:56,824 INFO [RS:0;jenkins-hbase4:46469] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C46469%2C1685188532634:(num 1685188591462) 2023-05-27 11:56:56,825 DEBUG [RS:0;jenkins-hbase4:46469] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:56:56,825 INFO [RS:0;jenkins-hbase4:46469] regionserver.LeaseManager(133): Closed leases 2023-05-27 11:56:56,825 INFO [RS:0;jenkins-hbase4:46469] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-27 11:56:56,825 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 11:56:56,826 INFO [RS:0;jenkins-hbase4:46469] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46469 2023-05-27 11:56:56,832 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:56:56,832 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): regionserver:46469-0x1006c7ed4dd0001, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46469,1685188532634 2023-05-27 11:56:56,832 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): regionserver:46469-0x1006c7ed4dd0001, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:56:56,832 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46469,1685188532634] 2023-05-27 11:56:56,833 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46469,1685188532634; numProcessing=1 2023-05-27 11:56:56,835 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46469,1685188532634 already deleted, retry=false 2023-05-27 11:56:56,835 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46469,1685188532634 expired; onlineServers=0 2023-05-27 11:56:56,835 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,35589,1685188531494' ***** 
2023-05-27 11:56:56,835 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-27 11:56:56,836 DEBUG [M:0;jenkins-hbase4:35589] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@65b9d242, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 11:56:56,836 INFO [M:0;jenkins-hbase4:35589] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35589,1685188531494 2023-05-27 11:56:56,836 INFO [M:0;jenkins-hbase4:35589] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35589,1685188531494; all regions closed. 2023-05-27 11:56:56,836 DEBUG [M:0;jenkins-hbase4:35589] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:56:56,836 DEBUG [M:0;jenkins-hbase4:35589] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-27 11:56:56,836 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-27 11:56:56,836 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188533592] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188533592,5,FailOnTimeoutGroup] 2023-05-27 11:56:56,836 DEBUG [M:0;jenkins-hbase4:35589] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-27 11:56:56,836 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188533595] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188533595,5,FailOnTimeoutGroup] 2023-05-27 11:56:56,838 INFO [M:0;jenkins-hbase4:35589] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-27 11:56:56,838 INFO [M:0;jenkins-hbase4:35589] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-27 11:56:56,838 INFO [M:0;jenkins-hbase4:35589] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-27 11:56:56,839 DEBUG [M:0;jenkins-hbase4:35589] master.HMaster(1512): Stopping service threads 2023-05-27 11:56:56,839 INFO [M:0;jenkins-hbase4:35589] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-27 11:56:56,839 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-27 11:56:56,839 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:56:56,840 INFO [M:0;jenkins-hbase4:35589] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-27 11:56:56,840 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:56:56,840 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-27 11:56:56,840 DEBUG [M:0;jenkins-hbase4:35589] zookeeper.ZKUtil(398): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-27 11:56:56,840 WARN [M:0;jenkins-hbase4:35589] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-27 11:56:56,840 INFO [M:0;jenkins-hbase4:35589] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-27 11:56:56,841 INFO [M:0;jenkins-hbase4:35589] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-27 11:56:56,841 DEBUG [M:0;jenkins-hbase4:35589] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 11:56:56,841 INFO [M:0;jenkins-hbase4:35589] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:56:56,841 DEBUG [M:0;jenkins-hbase4:35589] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:56:56,841 DEBUG [M:0;jenkins-hbase4:35589] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 11:56:56,841 DEBUG [M:0;jenkins-hbase4:35589] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:56:56,841 INFO [M:0;jenkins-hbase4:35589] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.28 KB heapSize=46.71 KB 2023-05-27 11:56:56,856 INFO [M:0;jenkins-hbase4:35589] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.28 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9e24f96badcc4328b7d64d1ca187c64f 2023-05-27 11:56:56,863 INFO [M:0;jenkins-hbase4:35589] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9e24f96badcc4328b7d64d1ca187c64f 2023-05-27 11:56:56,864 DEBUG [M:0;jenkins-hbase4:35589] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9e24f96badcc4328b7d64d1ca187c64f as hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9e24f96badcc4328b7d64d1ca187c64f 2023-05-27 11:56:56,871 INFO [M:0;jenkins-hbase4:35589] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9e24f96badcc4328b7d64d1ca187c64f 2023-05-27 11:56:56,872 INFO [M:0;jenkins-hbase4:35589] regionserver.HStore(1080): Added hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9e24f96badcc4328b7d64d1ca187c64f, entries=11, sequenceid=100, filesize=6.1 K 2023-05-27 11:56:56,873 INFO [M:0;jenkins-hbase4:35589] regionserver.HRegion(2948): Finished flush of dataSize ~38.28 KB/39196, heapSize ~46.70 KB/47816, currentSize=0 B/0 for 
1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=100, compaction requested=false 2023-05-27 11:56:56,874 INFO [M:0;jenkins-hbase4:35589] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:56:56,874 DEBUG [M:0;jenkins-hbase4:35589] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:56:56,875 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/MasterData/WALs/jenkins-hbase4.apache.org,35589,1685188531494 2023-05-27 11:56:56,881 INFO [M:0;jenkins-hbase4:35589] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-27 11:56:56,880 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 11:56:56,881 INFO [M:0;jenkins-hbase4:35589] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35589 2023-05-27 11:56:56,884 DEBUG [M:0;jenkins-hbase4:35589] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,35589,1685188531494 already deleted, retry=false 2023-05-27 11:56:56,934 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): regionserver:46469-0x1006c7ed4dd0001, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:56:56,934 INFO [RS:0;jenkins-hbase4:46469] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46469,1685188532634; zookeeper connection closed. 2023-05-27 11:56:56,934 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): regionserver:46469-0x1006c7ed4dd0001, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:56:56,935 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5fc6573e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5fc6573e 2023-05-27 11:56:56,935 INFO [Listener at localhost/38935] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-27 11:56:57,034 INFO [M:0;jenkins-hbase4:35589] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35589,1685188531494; zookeeper connection closed. 
2023-05-27 11:56:57,034 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:56:57,035 DEBUG [Listener at localhost/38935-EventThread] zookeeper.ZKWatcher(600): master:35589-0x1006c7ed4dd0000, quorum=127.0.0.1:55837, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:56:57,036 WARN [Listener at localhost/38935] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:56:57,040 INFO [Listener at localhost/38935] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:56:57,146 WARN [BP-436074741-172.31.14.131-1685188528501 heartbeating to localhost/127.0.0.1:43439] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 11:56:57,146 WARN [BP-436074741-172.31.14.131-1685188528501 heartbeating to localhost/127.0.0.1:43439] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-436074741-172.31.14.131-1685188528501 (Datanode Uuid c88aa5a4-3dea-4b8e-b72b-595ef9242720) service to localhost/127.0.0.1:43439 2023-05-27 11:56:57,148 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/cluster_1e6655c1-202e-5daa-c051-cc7a1eba33ab/dfs/data/data3/current/BP-436074741-172.31.14.131-1685188528501] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:56:57,148 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/cluster_1e6655c1-202e-5daa-c051-cc7a1eba33ab/dfs/data/data4/current/BP-436074741-172.31.14.131-1685188528501] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:56:57,150 WARN [Listener at localhost/38935] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:56:57,155 INFO [Listener at localhost/38935] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:56:57,161 WARN [BP-436074741-172.31.14.131-1685188528501 heartbeating to localhost/127.0.0.1:43439] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 11:56:57,161 WARN [BP-436074741-172.31.14.131-1685188528501 heartbeating to localhost/127.0.0.1:43439] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-436074741-172.31.14.131-1685188528501 (Datanode Uuid 3c8f5b42-f01e-45db-b4fa-c833a148e662) service to localhost/127.0.0.1:43439 2023-05-27 11:56:57,162 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/cluster_1e6655c1-202e-5daa-c051-cc7a1eba33ab/dfs/data/data1/current/BP-436074741-172.31.14.131-1685188528501] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:56:57,162 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/cluster_1e6655c1-202e-5daa-c051-cc7a1eba33ab/dfs/data/data2/current/BP-436074741-172.31.14.131-1685188528501] 
fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:56:57,205 INFO [Listener at localhost/38935] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:56:57,319 INFO [Listener at localhost/38935] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-27 11:56:57,363 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-27 11:56:57,375 INFO [Listener at localhost/38935] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=51 (was 10) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost:43439 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43439 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (498694620) connection to localhost/127.0.0.1:43439 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@a51b160 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending 
Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38935 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (498694620) connection to localhost/127.0.0.1:43439 from jenkins.hfs.0 java.lang.Object.wait(Native Method) 
org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (498694620) connection to localhost/127.0.0.1:43439 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: regionserver/jenkins-hbase4:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=444 (was 264) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=113 (was 279), ProcessCount=169 (was 171), AvailableMemoryMB=4575 (was 5144) 2023-05-27 11:56:57,385 INFO [Listener at localhost/38935] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=52, OpenFileDescriptor=444, MaxFileDescriptor=60000, SystemLoadAverage=113, ProcessCount=169, AvailableMemoryMB=4575 2023-05-27 11:56:57,386 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-27 11:56:57,386 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/hadoop.log.dir so I do NOT create it in target/test-data/b1c50085-a102-9218-001a-9e1036712df8 2023-05-27 11:56:57,386 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/23891b20-4c9e-0ca3-e13d-07d342b636a3/hadoop.tmp.dir so I do NOT create it in target/test-data/b1c50085-a102-9218-001a-9e1036712df8 2023-05-27 11:56:57,386 INFO [Listener at localhost/38935] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9, deleteOnExit=true 2023-05-27 11:56:57,386 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-27 11:56:57,387 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/test.cache.data in system properties and HBase conf 2023-05-27 11:56:57,387 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/hadoop.tmp.dir in system properties and HBase conf 2023-05-27 11:56:57,387 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/hadoop.log.dir in system properties and HBase conf 2023-05-27 11:56:57,387 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-27 11:56:57,387 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-27 11:56:57,387 INFO [Listener at localhost/38935] 
hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-27 11:56:57,387 DEBUG [Listener at localhost/38935] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-27 11:56:57,388 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-27 11:56:57,388 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-27 11:56:57,388 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-27 11:56:57,388 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 11:56:57,388 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-27 11:56:57,388 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-27 11:56:57,389 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 11:56:57,389 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 11:56:57,389 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-27 11:56:57,389 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/nfs.dump.dir in system properties and HBase conf 2023-05-27 11:56:57,389 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/java.io.tmpdir in system properties and HBase conf 2023-05-27 11:56:57,389 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 11:56:57,389 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-27 11:56:57,389 INFO [Listener at localhost/38935] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-27 11:56:57,391 WARN [Listener at localhost/38935] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-27 11:56:57,394 WARN [Listener at localhost/38935] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 11:56:57,394 WARN [Listener at localhost/38935] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 11:56:57,435 WARN [Listener at localhost/38935] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:56:57,437 INFO [Listener at localhost/38935] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:56:57,441 INFO [Listener at localhost/38935] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/java.io.tmpdir/Jetty_localhost_33229_hdfs____4yxfsz/webapp 2023-05-27 11:56:57,532 INFO [Listener at localhost/38935] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33229 2023-05-27 11:56:57,534 WARN [Listener at localhost/38935] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-27 11:56:57,538 WARN [Listener at localhost/38935] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 11:56:57,538 WARN [Listener at localhost/38935] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 11:56:57,580 WARN [Listener at localhost/35539] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:56:57,590 WARN [Listener at localhost/35539] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:56:57,593 WARN [Listener at localhost/35539] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:56:57,594 INFO [Listener at localhost/35539] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:56:57,598 INFO [Listener at localhost/35539] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/java.io.tmpdir/Jetty_localhost_34357_datanode____r1h4aj/webapp 2023-05-27 11:56:57,692 INFO [Listener at localhost/35539] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34357 2023-05-27 11:56:57,706 WARN [Listener at localhost/46479] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:56:57,757 WARN [Listener at localhost/46479] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:56:57,760 WARN [Listener at localhost/46479] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:56:57,761 INFO [Listener at localhost/46479] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:56:57,767 INFO [Listener at localhost/46479] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/java.io.tmpdir/Jetty_localhost_43395_datanode____y4jjpz/webapp 2023-05-27 11:56:57,794 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-27 11:56:57,916 INFO [Listener at localhost/46479] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43395 2023-05-27 11:56:57,987 WARN [Listener at localhost/43037] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:56:58,004 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfb54b0696694065: Processing first storage report for DS-becc151e-14aa-4092-a53a-0ff7c9b1e275 from datanode 05a04d9d-a993-4bbd-8463-b580992e8891 2023-05-27 11:56:58,004 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfb54b0696694065: from storage DS-becc151e-14aa-4092-a53a-0ff7c9b1e275 node DatanodeRegistration(127.0.0.1:34917, datanodeUuid=05a04d9d-a993-4bbd-8463-b580992e8891, infoPort=41311, infoSecurePort=0, ipcPort=46479, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397), blocks: 0, 
hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:56:58,004 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfb54b0696694065: Processing first storage report for DS-3bdd3878-f2e4-4708-b234-fab655681abe from datanode 05a04d9d-a993-4bbd-8463-b580992e8891 2023-05-27 11:56:58,004 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfb54b0696694065: from storage DS-3bdd3878-f2e4-4708-b234-fab655681abe node DatanodeRegistration(127.0.0.1:34917, datanodeUuid=05a04d9d-a993-4bbd-8463-b580992e8891, infoPort=41311, infoSecurePort=0, ipcPort=46479, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:56:58,099 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xef8dbf70e4575c28: Processing first storage report for DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8 from datanode a538dac3-b48d-41b5-a6f1-059b3cadc615 2023-05-27 11:56:58,100 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xef8dbf70e4575c28: from storage DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8 node DatanodeRegistration(127.0.0.1:40121, datanodeUuid=a538dac3-b48d-41b5-a6f1-059b3cadc615, infoPort=36033, infoSecurePort=0, ipcPort=43037, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:56:58,100 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xef8dbf70e4575c28: Processing first storage report for DS-23d4ac92-3d2c-4f84-bbeb-d09d0b242b39 from datanode a538dac3-b48d-41b5-a6f1-059b3cadc615 2023-05-27 11:56:58,100 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xef8dbf70e4575c28: from storage DS-23d4ac92-3d2c-4f84-bbeb-d09d0b242b39 node DatanodeRegistration(127.0.0.1:40121, datanodeUuid=a538dac3-b48d-41b5-a6f1-059b3cadc615, infoPort=36033, infoSecurePort=0, ipcPort=43037, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:56:58,107 DEBUG [Listener at localhost/43037] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8 2023-05-27 11:56:58,111 INFO [Listener at localhost/43037] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/zookeeper_0, clientPort=49196, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-27 11:56:58,112 INFO [Listener at localhost/43037] zookeeper.MiniZooKeeperCluster(283): Started 
MiniZooKeeperCluster and ran 'stat' on client port=49196 2023-05-27 11:56:58,113 INFO [Listener at localhost/43037] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:56:58,114 INFO [Listener at localhost/43037] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:56:58,135 INFO [Listener at localhost/43037] util.FSUtils(471): Created version file at hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99 with version=8 2023-05-27 11:56:58,136 INFO [Listener at localhost/43037] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/hbase-staging 2023-05-27 11:56:58,138 INFO [Listener at localhost/43037] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 11:56:58,139 INFO [Listener at localhost/43037] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:56:58,139 INFO [Listener at localhost/43037] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 11:56:58,139 INFO [Listener at localhost/43037] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 11:56:58,139 INFO [Listener at localhost/43037] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:56:58,139 INFO [Listener at localhost/43037] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 11:56:58,139 INFO [Listener at localhost/43037] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 11:56:58,141 INFO [Listener at localhost/43037] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36001 2023-05-27 11:56:58,142 INFO [Listener at localhost/43037] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:56:58,143 INFO [Listener at localhost/43037] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:56:58,145 INFO [Listener at localhost/43037] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36001 connecting to ZooKeeper ensemble=127.0.0.1:49196 2023-05-27 11:56:58,153 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:360010x0, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 11:56:58,167 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKWatcher(625): master:36001-0x1006c802a810000 connected 2023-05-27 11:56:58,204 DEBUG [Listener at localhost/43037] zookeeper.ZKUtil(164): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:56:58,204 DEBUG [Listener at localhost/43037] zookeeper.ZKUtil(164): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:56:58,206 DEBUG [Listener at localhost/43037] zookeeper.ZKUtil(164): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 11:56:58,211 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36001 2023-05-27 11:56:58,211 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36001 2023-05-27 11:56:58,212 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36001 2023-05-27 11:56:58,214 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36001 2023-05-27 11:56:58,214 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36001 2023-05-27 11:56:58,215 INFO [Listener at localhost/43037] master.HMaster(444): hbase.rootdir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99, hbase.cluster.distributed=false 2023-05-27 11:56:58,237 INFO [Listener at localhost/43037] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 11:56:58,238 INFO [Listener at localhost/43037] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:56:58,238 INFO [Listener at localhost/43037] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 11:56:58,238 INFO [Listener at localhost/43037] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 11:56:58,238 INFO [Listener at localhost/43037] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:56:58,238 INFO [Listener at localhost/43037] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 11:56:58,238 INFO [Listener at localhost/43037] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 11:56:58,241 INFO [Listener at localhost/43037] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44867 2023-05-27 11:56:58,241 INFO [Listener at localhost/43037] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-27 11:56:58,242 DEBUG [Listener at 
localhost/43037] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-27 11:56:58,243 INFO [Listener at localhost/43037] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:56:58,244 INFO [Listener at localhost/43037] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:56:58,245 INFO [Listener at localhost/43037] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44867 connecting to ZooKeeper ensemble=127.0.0.1:49196 2023-05-27 11:56:58,248 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:448670x0, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 11:56:58,249 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44867-0x1006c802a810001 connected 2023-05-27 11:56:58,249 DEBUG [Listener at localhost/43037] zookeeper.ZKUtil(164): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:56:58,250 DEBUG [Listener at localhost/43037] zookeeper.ZKUtil(164): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:56:58,250 DEBUG [Listener at localhost/43037] zookeeper.ZKUtil(164): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 11:56:58,251 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44867 2023-05-27 11:56:58,251 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44867 2023-05-27 11:56:58,252 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44867 2023-05-27 11:56:58,252 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44867 2023-05-27 11:56:58,252 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44867 2023-05-27 11:56:58,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36001,1685188618137 2023-05-27 11:56:58,254 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 11:56:58,255 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36001,1685188618137 2023-05-27 11:56:58,256 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 11:56:58,256 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 11:56:58,257 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:56:58,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 11:56:58,258 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 11:56:58,258 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36001,1685188618137 from backup master directory 2023-05-27 11:56:58,261 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36001,1685188618137 2023-05-27 11:56:58,261 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 11:56:58,261 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
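The RpcExecutor / RWQueueRpcExecutor entries above (default.FPBQ.Fifo, priority.RWQ.Fifo with separate read and write handlers, no scan queues) are shaped by a small set of configuration keys. A hedged sketch of those knobs; the values are assumptions chosen only to mirror the logged numbers, not settings read from this run.

import org.apache.hadoop.conf.Configuration;

public final class RpcQueueConfigSketch {
  static void apply(Configuration conf) {
    conf.setInt("hbase.regionserver.handler.count", 3);           // handlerCount=3 in default.FPBQ.Fifo
    conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f); // split the priority queue into read/write handlers
    conf.setFloat("hbase.ipc.server.callqueue.scan.ratio", 0.0f); // scanQueues=0 / scanHandlers=0, as logged
  }
}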
2023-05-27 11:56:58,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36001,1685188618137 2023-05-27 11:56:58,279 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/hbase.id with ID: a082232c-5006-4d32-8839-b18621c9e6bc 2023-05-27 11:56:58,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:56:58,294 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:56:58,307 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7f452336 to 127.0.0.1:49196 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:56:58,310 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7cee5622, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:56:58,310 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 11:56:58,311 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-27 11:56:58,311 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:56:58,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/data/master/store-tmp 2023-05-27 11:56:58,322 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:56:58,322 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 11:56:58,322 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:56:58,322 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:56:58,322 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 11:56:58,322 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:56:58,322 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:56:58,322 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:56:58,323 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/WALs/jenkins-hbase4.apache.org,36001,1685188618137 2023-05-27 11:56:58,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36001%2C1685188618137, suffix=, logDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/WALs/jenkins-hbase4.apache.org,36001,1685188618137, archiveDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/oldWALs, maxLogs=10 2023-05-27 11:56:58,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/WALs/jenkins-hbase4.apache.org,36001,1685188618137/jenkins-hbase4.apache.org%2C36001%2C1685188618137.1685188618326 2023-05-27 11:56:58,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK], DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK]] 2023-05-27 11:56:58,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:56:58,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:56:58,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:56:58,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:56:58,335 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:56:58,336 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-27 11:56:58,337 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-27 11:56:58,337 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:56:58,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:56:58,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:56:58,342 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:56:58,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:56:58,344 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=754264, jitterRate=-0.04090411961078644}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:56:58,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:56:58,346 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-27 11:56:58,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-27 11:56:58,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
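The "WAL configuration: blocksize=256 MB, rollsize=128 MB" entry above reflects the WAL block size combined with a roll-size multiplier (128 MB is half of 256 MB), which is the behavior this TestLogRolling run exercises. A hedged sketch of the keys involved; the values are assumptions for illustration, not read from this run.

import org.apache.hadoop.conf.Configuration;

public final class WalRollConfigSketch {
  static void apply(Configuration conf) {
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // WAL block size (256 MB)
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);          // roll at 50% of the block size => 128 MB
    conf.setLong("hbase.regionserver.logroll.period", 3_600_000L);         // time-based roll once per hour
  }
}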
2023-05-27 11:56:58,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-27 11:56:58,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-27 11:56:58,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-27 11:56:58,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-27 11:56:58,353 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-27 11:56:58,354 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-27 11:56:58,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-27 11:56:58,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
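The StochasticLoadBalancer line above lists the parameters it loaded (maxSteps, stepsPerRegion, maxRunningTime, runMaxSteps). A hedged sketch of how those same knobs can be set; the values simply echo the logged figures and are otherwise assumptions.

import org.apache.hadoop.conf.Configuration;

public final class BalancerConfigSketch {
  static void apply(Configuration conf) {
    conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1_000_000L);    // maxSteps
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);      // stepsPerRegion
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L); // maxRunningTime in ms
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);   // runMaxSteps
  }
}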
2023-05-27 11:56:58,372 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-27 11:56:58,372 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-27 11:56:58,373 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-27 11:56:58,375 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:56:58,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-27 11:56:58,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-27 11:56:58,377 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-27 11:56:58,380 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 11:56:58,380 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 11:56:58,380 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:56:58,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36001,1685188618137, sessionid=0x1006c802a810000, setting cluster-up flag (Was=false) 2023-05-27 11:56:58,384 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:56:58,391 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-27 11:56:58,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36001,1685188618137 2023-05-27 11:56:58,395 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 
11:56:58,401 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-27 11:56:58,402 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36001,1685188618137 2023-05-27 11:56:58,403 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/.hbase-snapshot/.tmp 2023-05-27 11:56:58,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-27 11:56:58,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:56:58,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:56:58,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:56:58,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:56:58,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-27 11:56:58,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:58,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 11:56:58,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:58,408 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685188648408 2023-05-27 11:56:58,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-27 11:56:58,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-27 11:56:58,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-27 11:56:58,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-27 11:56:58,409 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-27 11:56:58,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-27 11:56:58,412 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:58,412 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 11:56:58,412 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-27 11:56:58,412 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-27 11:56:58,412 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-27 11:56:58,412 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-27 11:56:58,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-27 11:56:58,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-27 11:56:58,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188618413,5,FailOnTimeoutGroup] 2023-05-27 11:56:58,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188618413,5,FailOnTimeoutGroup] 2023-05-27 11:56:58,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:58,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-27 11:56:58,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:58,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
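The LogsCleaner and HFileCleaner chores above run on a fixed period and delegate to TTL-based cleaner plugins for old WALs and archived HFiles. A hedged sketch of the related keys, to the best of my knowledge; the values are assumptions, not settings read from this run.

import org.apache.hadoop.conf.Configuration;

public final class CleanerConfigSketch {
  static void apply(Configuration conf) {
    conf.setInt("hbase.master.cleaner.interval", 600_000);   // chore period for LogsCleaner/HFileCleaner (ms)
    conf.setLong("hbase.master.logcleaner.ttl", 600_000L);   // TimeToLiveLogCleaner: retain old WALs this long
    conf.setLong("hbase.master.hfilecleaner.ttl", 300_000L); // TimeToLiveHFileCleaner: retain archived HFiles this long
  }
}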
2023-05-27 11:56:58,414 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 11:56:58,427 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 11:56:58,428 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 11:56:58,428 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99 2023-05-27 11:56:58,438 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:56:58,439 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 11:56:58,441 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740/info 2023-05-27 11:56:58,442 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 11:56:58,442 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:56:58,443 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 11:56:58,444 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:56:58,444 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 11:56:58,445 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:56:58,445 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 11:56:58,446 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740/table 2023-05-27 11:56:58,447 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 11:56:58,447 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:56:58,448 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740 2023-05-27 11:56:58,449 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740 2023-05-27 11:56:58,451 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 11:56:58,452 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 11:56:58,454 INFO [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(951): ClusterId : a082232c-5006-4d32-8839-b18621c9e6bc 2023-05-27 11:56:58,455 DEBUG [RS:0;jenkins-hbase4:44867] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-27 11:56:58,456 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:56:58,456 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=773415, jitterRate=-0.01655237376689911}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 11:56:58,456 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 11:56:58,457 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 11:56:58,457 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 11:56:58,457 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 11:56:58,457 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 11:56:58,457 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 11:56:58,457 DEBUG [RS:0;jenkins-hbase4:44867] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-27 11:56:58,457 DEBUG [RS:0;jenkins-hbase4:44867] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-27 11:56:58,457 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 11:56:58,457 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 
1588230740: 2023-05-27 11:56:58,459 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 11:56:58,459 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-27 11:56:58,459 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-27 11:56:58,460 DEBUG [RS:0;jenkins-hbase4:44867] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-27 11:56:58,461 DEBUG [RS:0;jenkins-hbase4:44867] zookeeper.ReadOnlyZKClient(139): Connect 0x6688172e to 127.0.0.1:49196 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:56:58,466 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-27 11:56:58,469 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-27 11:56:58,470 DEBUG [RS:0;jenkins-hbase4:44867] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@684ff58f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:56:58,471 DEBUG [RS:0;jenkins-hbase4:44867] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6c707c5e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 11:56:58,480 DEBUG [RS:0;jenkins-hbase4:44867] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:44867 2023-05-27 11:56:58,480 INFO [RS:0;jenkins-hbase4:44867] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-27 11:56:58,480 INFO [RS:0;jenkins-hbase4:44867] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-27 11:56:58,480 DEBUG [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1022): About to register with Master. 
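The hbase:meta descriptor created a few entries above ('info' family with BLOCKSIZE 8192, IN_MEMORY true, 3 versions, no bloom filter) corresponds roughly to a descriptor built with the 2.x builder API, as in this illustrative sketch; the table name and family here are placeholders, not the code HBase actually runs for meta.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public final class MetaLikeDescriptorSketch {
  static TableDescriptor build() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setMaxVersions(3)          // VERSIONS => '3'
            .setInMemory(true)          // IN_MEMORY => 'true'
            .setBlocksize(8192)         // BLOCKSIZE => '8192'
            .setBloomFilterType(BloomType.NONE) // BLOOMFILTER => 'NONE'
            .build())
        .build();
  }
}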
2023-05-27 11:56:58,481 INFO [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,36001,1685188618137 with isa=jenkins-hbase4.apache.org/172.31.14.131:44867, startcode=1685188618236 2023-05-27 11:56:58,481 DEBUG [RS:0;jenkins-hbase4:44867] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-27 11:56:58,485 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40153, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 11:56:58,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36001] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:56:58,486 DEBUG [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99 2023-05-27 11:56:58,486 DEBUG [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35539 2023-05-27 11:56:58,486 DEBUG [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 11:56:58,488 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:56:58,488 DEBUG [RS:0;jenkins-hbase4:44867] zookeeper.ZKUtil(162): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:56:58,489 WARN [RS:0;jenkins-hbase4:44867] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-27 11:56:58,489 INFO [RS:0;jenkins-hbase4:44867] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:56:58,489 DEBUG [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1946): logDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:56:58,489 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44867,1685188618236] 2023-05-27 11:56:58,492 DEBUG [RS:0;jenkins-hbase4:44867] zookeeper.ZKUtil(162): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:56:58,493 DEBUG [RS:0;jenkins-hbase4:44867] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 11:56:58,493 INFO [RS:0;jenkins-hbase4:44867] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 11:56:58,497 INFO [RS:0;jenkins-hbase4:44867] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 11:56:58,499 INFO [RS:0;jenkins-hbase4:44867] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 11:56:58,499 INFO [RS:0;jenkins-hbase4:44867] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:58,499 INFO [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 11:56:58,500 INFO [RS:0;jenkins-hbase4:44867] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-27 11:56:58,500 DEBUG [RS:0;jenkins-hbase4:44867] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:58,500 DEBUG [RS:0;jenkins-hbase4:44867] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:58,500 DEBUG [RS:0;jenkins-hbase4:44867] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:58,500 DEBUG [RS:0;jenkins-hbase4:44867] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:58,500 DEBUG [RS:0;jenkins-hbase4:44867] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:58,501 DEBUG [RS:0;jenkins-hbase4:44867] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 11:56:58,501 DEBUG [RS:0;jenkins-hbase4:44867] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:58,501 DEBUG [RS:0;jenkins-hbase4:44867] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:58,501 DEBUG [RS:0;jenkins-hbase4:44867] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:58,501 DEBUG [RS:0;jenkins-hbase4:44867] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:58,502 INFO [RS:0;jenkins-hbase4:44867] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:58,502 INFO [RS:0;jenkins-hbase4:44867] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:58,502 INFO [RS:0;jenkins-hbase4:44867] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:58,514 INFO [RS:0;jenkins-hbase4:44867] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 11:56:58,514 INFO [RS:0;jenkins-hbase4:44867] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44867,1685188618236-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-27 11:56:58,532 INFO [RS:0;jenkins-hbase4:44867] regionserver.Replication(203): jenkins-hbase4.apache.org,44867,1685188618236 started 2023-05-27 11:56:58,532 INFO [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44867,1685188618236, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44867, sessionid=0x1006c802a810001 2023-05-27 11:56:58,532 DEBUG [RS:0;jenkins-hbase4:44867] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 11:56:58,532 DEBUG [RS:0;jenkins-hbase4:44867] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:56:58,532 DEBUG [RS:0;jenkins-hbase4:44867] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44867,1685188618236' 2023-05-27 11:56:58,532 DEBUG [RS:0;jenkins-hbase4:44867] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:56:58,533 DEBUG [RS:0;jenkins-hbase4:44867] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:56:58,533 DEBUG [RS:0;jenkins-hbase4:44867] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 11:56:58,533 DEBUG [RS:0;jenkins-hbase4:44867] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 11:56:58,533 DEBUG [RS:0;jenkins-hbase4:44867] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:56:58,533 DEBUG [RS:0;jenkins-hbase4:44867] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44867,1685188618236' 2023-05-27 11:56:58,533 DEBUG [RS:0;jenkins-hbase4:44867] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 11:56:58,534 DEBUG [RS:0;jenkins-hbase4:44867] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 11:56:58,534 DEBUG [RS:0;jenkins-hbase4:44867] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 11:56:58,534 INFO [RS:0;jenkins-hbase4:44867] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 11:56:58,534 INFO [RS:0;jenkins-hbase4:44867] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-27 11:56:58,619 DEBUG [jenkins-hbase4:36001] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-27 11:56:58,620 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44867,1685188618236, state=OPENING 2023-05-27 11:56:58,621 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-27 11:56:58,624 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:56:58,625 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 11:56:58,625 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44867,1685188618236}] 2023-05-27 11:56:58,636 INFO [RS:0;jenkins-hbase4:44867] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44867%2C1685188618236, suffix=, logDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236, archiveDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/oldWALs, maxLogs=32 2023-05-27 11:56:58,649 INFO [RS:0;jenkins-hbase4:44867] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236/jenkins-hbase4.apache.org%2C44867%2C1685188618236.1685188618638 2023-05-27 11:56:58,649 DEBUG [RS:0;jenkins-hbase4:44867] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK], DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK]] 2023-05-27 11:56:58,779 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:56:58,780 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 11:56:58,782 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54062, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 11:56:58,787 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-27 11:56:58,787 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:56:58,789 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44867%2C1685188618236.meta, suffix=.meta, logDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236, archiveDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/oldWALs, maxLogs=32 2023-05-27 11:56:58,801 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236/jenkins-hbase4.apache.org%2C44867%2C1685188618236.meta.1685188618790.meta 2023-05-27 11:56:58,801 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK], DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] 2023-05-27 11:56:58,801 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:56:58,801 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-27 11:56:58,801 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-27 11:56:58,802 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-27 11:56:58,802 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-27 11:56:58,802 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:56:58,802 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-27 11:56:58,802 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-27 11:56:58,804 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 11:56:58,805 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740/info 2023-05-27 11:56:58,805 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740/info 2023-05-27 11:56:58,806 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 11:56:58,806 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:56:58,806 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 11:56:58,807 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:56:58,807 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:56:58,808 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 11:56:58,808 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:56:58,808 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 11:56:58,810 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740/table 2023-05-27 11:56:58,810 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740/table 2023-05-27 11:56:58,811 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 11:56:58,811 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:56:58,813 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740 2023-05-27 11:56:58,814 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/meta/1588230740 2023-05-27 11:56:58,816 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 11:56:58,818 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 11:56:58,819 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=857865, jitterRate=0.09083199501037598}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 11:56:58,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 11:56:58,820 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685188618779 2023-05-27 11:56:58,824 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-27 11:56:58,824 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-27 11:56:58,825 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44867,1685188618236, state=OPEN 2023-05-27 11:56:58,827 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-27 11:56:58,827 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 11:56:58,830 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-27 11:56:58,830 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44867,1685188618236 in 202 msec 2023-05-27 11:56:58,832 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-27 11:56:58,832 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 371 msec 2023-05-27 11:56:58,835 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 429 msec 2023-05-27 11:56:58,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685188618835, completionTime=-1 2023-05-27 11:56:58,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-27 11:56:58,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-27 11:56:58,837 DEBUG [hconnection-0x583eb081-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 11:56:58,840 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54070, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 11:56:58,841 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-27 11:56:58,841 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685188678841 2023-05-27 11:56:58,841 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685188738841 2023-05-27 11:56:58,841 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-27 11:56:58,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36001,1685188618137-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:58,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36001,1685188618137-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:58,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36001,1685188618137-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:58,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36001, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:58,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:58,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-27 11:56:58,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 11:56:58,849 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-27 11:56:58,850 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-27 11:56:58,851 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 11:56:58,852 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 11:56:58,854 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/.tmp/data/hbase/namespace/8c17b907cf93c567a79b078a9826aa1c 2023-05-27 11:56:58,855 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/.tmp/data/hbase/namespace/8c17b907cf93c567a79b078a9826aa1c empty. 2023-05-27 11:56:58,855 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/.tmp/data/hbase/namespace/8c17b907cf93c567a79b078a9826aa1c 2023-05-27 11:56:58,855 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-27 11:56:58,868 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-27 11:56:58,869 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8c17b907cf93c567a79b078a9826aa1c, NAME => 'hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/.tmp 2023-05-27 11:56:58,877 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:56:58,878 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 8c17b907cf93c567a79b078a9826aa1c, disabling compactions & flushes 2023-05-27 11:56:58,878 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 
2023-05-27 11:56:58,878 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 2023-05-27 11:56:58,878 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. after waiting 0 ms 2023-05-27 11:56:58,878 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 2023-05-27 11:56:58,878 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 2023-05-27 11:56:58,878 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 8c17b907cf93c567a79b078a9826aa1c: 2023-05-27 11:56:58,881 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 11:56:58,882 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188618882"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188618882"}]},"ts":"1685188618882"} 2023-05-27 11:56:58,885 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 11:56:58,886 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 11:56:58,886 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188618886"}]},"ts":"1685188618886"} 2023-05-27 11:56:58,888 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-27 11:56:58,894 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8c17b907cf93c567a79b078a9826aa1c, ASSIGN}] 2023-05-27 11:56:58,896 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8c17b907cf93c567a79b078a9826aa1c, ASSIGN 2023-05-27 11:56:58,897 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8c17b907cf93c567a79b078a9826aa1c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44867,1685188618236; forceNewPlan=false, retain=false 2023-05-27 11:56:59,048 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8c17b907cf93c567a79b078a9826aa1c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:56:59,048 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188619048"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188619048"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188619048"}]},"ts":"1685188619048"} 2023-05-27 11:56:59,051 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 8c17b907cf93c567a79b078a9826aa1c, server=jenkins-hbase4.apache.org,44867,1685188618236}] 2023-05-27 11:56:59,210 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 2023-05-27 11:56:59,210 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8c17b907cf93c567a79b078a9826aa1c, NAME => 'hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:56:59,210 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8c17b907cf93c567a79b078a9826aa1c 2023-05-27 11:56:59,210 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:56:59,210 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8c17b907cf93c567a79b078a9826aa1c 2023-05-27 11:56:59,210 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8c17b907cf93c567a79b078a9826aa1c 2023-05-27 11:56:59,212 INFO [StoreOpener-8c17b907cf93c567a79b078a9826aa1c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8c17b907cf93c567a79b078a9826aa1c 2023-05-27 11:56:59,213 DEBUG [StoreOpener-8c17b907cf93c567a79b078a9826aa1c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/namespace/8c17b907cf93c567a79b078a9826aa1c/info 2023-05-27 11:56:59,213 DEBUG [StoreOpener-8c17b907cf93c567a79b078a9826aa1c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/namespace/8c17b907cf93c567a79b078a9826aa1c/info 2023-05-27 11:56:59,214 INFO [StoreOpener-8c17b907cf93c567a79b078a9826aa1c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8c17b907cf93c567a79b078a9826aa1c columnFamilyName info 2023-05-27 11:56:59,214 INFO [StoreOpener-8c17b907cf93c567a79b078a9826aa1c-1] regionserver.HStore(310): Store=8c17b907cf93c567a79b078a9826aa1c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:56:59,216 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/namespace/8c17b907cf93c567a79b078a9826aa1c 2023-05-27 11:56:59,217 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/namespace/8c17b907cf93c567a79b078a9826aa1c 2023-05-27 11:56:59,220 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8c17b907cf93c567a79b078a9826aa1c 2023-05-27 11:56:59,222 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/hbase/namespace/8c17b907cf93c567a79b078a9826aa1c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:56:59,222 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8c17b907cf93c567a79b078a9826aa1c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=865243, jitterRate=0.100213423371315}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:56:59,223 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8c17b907cf93c567a79b078a9826aa1c: 2023-05-27 11:56:59,224 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c., pid=6, masterSystemTime=1685188619205 2023-05-27 11:56:59,227 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 2023-05-27 11:56:59,227 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 
2023-05-27 11:56:59,228 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8c17b907cf93c567a79b078a9826aa1c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:56:59,228 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188619228"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685188619228"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685188619228"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188619228"}]},"ts":"1685188619228"} 2023-05-27 11:56:59,233 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-27 11:56:59,233 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 8c17b907cf93c567a79b078a9826aa1c, server=jenkins-hbase4.apache.org,44867,1685188618236 in 179 msec 2023-05-27 11:56:59,236 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-27 11:56:59,236 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8c17b907cf93c567a79b078a9826aa1c, ASSIGN in 339 msec 2023-05-27 11:56:59,237 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 11:56:59,237 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188619237"}]},"ts":"1685188619237"} 2023-05-27 11:56:59,239 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-27 11:56:59,242 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 11:56:59,244 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 394 msec 2023-05-27 11:56:59,251 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-27 11:56:59,252 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:56:59,252 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:56:59,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-27 11:56:59,266 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): 
master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:56:59,270 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-05-27 11:56:59,278 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-27 11:56:59,286 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:56:59,290 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-05-27 11:56:59,303 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-27 11:56:59,305 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-27 11:56:59,305 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.044sec 2023-05-27 11:56:59,305 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-27 11:56:59,305 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-27 11:56:59,305 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-27 11:56:59,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36001,1685188618137-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-27 11:56:59,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36001,1685188618137-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-27 11:56:59,308 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-27 11:56:59,354 DEBUG [Listener at localhost/43037] zookeeper.ReadOnlyZKClient(139): Connect 0x65ab8d8a to 127.0.0.1:49196 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:56:59,361 DEBUG [Listener at localhost/43037] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68667cd3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:56:59,363 DEBUG [hconnection-0x629c0801-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 11:56:59,365 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54072, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 11:56:59,367 INFO [Listener at localhost/43037] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36001,1685188618137 2023-05-27 11:56:59,367 INFO [Listener at localhost/43037] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:56:59,371 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-27 11:56:59,371 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:56:59,371 INFO [Listener at localhost/43037] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-27 11:56:59,384 INFO [Listener at localhost/43037] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 11:56:59,384 INFO [Listener at localhost/43037] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:56:59,384 INFO [Listener at localhost/43037] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 11:56:59,384 INFO [Listener at localhost/43037] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 11:56:59,384 INFO [Listener at localhost/43037] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:56:59,384 INFO [Listener at localhost/43037] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 11:56:59,385 INFO [Listener at localhost/43037] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-27 11:56:59,386 INFO [Listener at localhost/43037] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44713 2023-05-27 11:56:59,386 INFO [Listener at localhost/43037] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-27 11:56:59,387 DEBUG [Listener at localhost/43037] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-27 11:56:59,388 INFO [Listener at localhost/43037] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:56:59,389 INFO [Listener at localhost/43037] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:56:59,390 INFO [Listener at localhost/43037] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44713 connecting to ZooKeeper ensemble=127.0.0.1:49196 2023-05-27 11:56:59,393 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:447130x0, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 11:56:59,394 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44713-0x1006c802a810005 connected 2023-05-27 11:56:59,394 DEBUG [Listener at localhost/43037] zookeeper.ZKUtil(162): regionserver:44713-0x1006c802a810005, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 11:56:59,395 DEBUG [Listener at localhost/43037] zookeeper.ZKUtil(162): regionserver:44713-0x1006c802a810005, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-05-27 11:56:59,396 DEBUG [Listener at localhost/43037] zookeeper.ZKUtil(164): regionserver:44713-0x1006c802a810005, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 11:56:59,396 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44713 2023-05-27 11:56:59,402 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44713 2023-05-27 11:56:59,402 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44713 2023-05-27 11:56:59,403 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44713 2023-05-27 11:56:59,403 DEBUG [Listener at localhost/43037] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44713 2023-05-27 11:56:59,405 INFO [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(951): ClusterId : a082232c-5006-4d32-8839-b18621c9e6bc 2023-05-27 11:56:59,405 DEBUG [RS:1;jenkins-hbase4:44713] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-27 11:56:59,408 DEBUG [RS:1;jenkins-hbase4:44713] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-27 11:56:59,409 DEBUG [RS:1;jenkins-hbase4:44713] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-27 11:56:59,410 DEBUG 
[RS:1;jenkins-hbase4:44713] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-27 11:56:59,411 DEBUG [RS:1;jenkins-hbase4:44713] zookeeper.ReadOnlyZKClient(139): Connect 0x021b88a0 to 127.0.0.1:49196 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:56:59,415 DEBUG [RS:1;jenkins-hbase4:44713] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cdc7693, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:56:59,415 DEBUG [RS:1;jenkins-hbase4:44713] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8e3e3d0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 11:56:59,424 DEBUG [RS:1;jenkins-hbase4:44713] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:44713 2023-05-27 11:56:59,425 INFO [RS:1;jenkins-hbase4:44713] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-27 11:56:59,425 INFO [RS:1;jenkins-hbase4:44713] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-27 11:56:59,425 DEBUG [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(1022): About to register with Master. 2023-05-27 11:56:59,425 INFO [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,36001,1685188618137 with isa=jenkins-hbase4.apache.org/172.31.14.131:44713, startcode=1685188619383 2023-05-27 11:56:59,426 DEBUG [RS:1;jenkins-hbase4:44713] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-27 11:56:59,428 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42049, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 11:56:59,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36001] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:56:59,429 DEBUG [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99 2023-05-27 11:56:59,429 DEBUG [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35539 2023-05-27 11:56:59,429 DEBUG [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 11:56:59,431 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:56:59,431 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:56:59,431 DEBUG [RS:1;jenkins-hbase4:44713] zookeeper.ZKUtil(162): regionserver:44713-0x1006c802a810005, 
quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:56:59,431 WARN [RS:1;jenkins-hbase4:44713] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-27 11:56:59,431 INFO [RS:1;jenkins-hbase4:44713] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:56:59,431 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44713,1685188619383] 2023-05-27 11:56:59,431 DEBUG [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(1946): logDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:56:59,431 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:56:59,432 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:56:59,436 DEBUG [RS:1;jenkins-hbase4:44713] zookeeper.ZKUtil(162): regionserver:44713-0x1006c802a810005, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:56:59,436 DEBUG [RS:1;jenkins-hbase4:44713] zookeeper.ZKUtil(162): regionserver:44713-0x1006c802a810005, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:56:59,437 DEBUG [RS:1;jenkins-hbase4:44713] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 11:56:59,438 INFO [RS:1;jenkins-hbase4:44713] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 11:56:59,440 INFO [RS:1;jenkins-hbase4:44713] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 11:56:59,440 INFO [RS:1;jenkins-hbase4:44713] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 11:56:59,441 INFO [RS:1;jenkins-hbase4:44713] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:59,441 INFO [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 11:56:59,443 INFO [RS:1;jenkins-hbase4:44713] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-27 11:56:59,443 DEBUG [RS:1;jenkins-hbase4:44713] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:59,443 DEBUG [RS:1;jenkins-hbase4:44713] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:59,443 DEBUG [RS:1;jenkins-hbase4:44713] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:59,443 DEBUG [RS:1;jenkins-hbase4:44713] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:59,443 DEBUG [RS:1;jenkins-hbase4:44713] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:59,443 DEBUG [RS:1;jenkins-hbase4:44713] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 11:56:59,443 DEBUG [RS:1;jenkins-hbase4:44713] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:59,443 DEBUG [RS:1;jenkins-hbase4:44713] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:59,443 DEBUG [RS:1;jenkins-hbase4:44713] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:59,444 DEBUG [RS:1;jenkins-hbase4:44713] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:56:59,446 INFO [RS:1;jenkins-hbase4:44713] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:59,446 INFO [RS:1;jenkins-hbase4:44713] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:59,446 INFO [RS:1;jenkins-hbase4:44713] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 11:56:59,460 INFO [RS:1;jenkins-hbase4:44713] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 11:56:59,460 INFO [RS:1;jenkins-hbase4:44713] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44713,1685188619383-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-27 11:56:59,472 INFO [RS:1;jenkins-hbase4:44713] regionserver.Replication(203): jenkins-hbase4.apache.org,44713,1685188619383 started 2023-05-27 11:56:59,472 INFO [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44713,1685188619383, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44713, sessionid=0x1006c802a810005 2023-05-27 11:56:59,472 INFO [Listener at localhost/43037] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase4:44713,5,FailOnTimeoutGroup] 2023-05-27 11:56:59,472 DEBUG [RS:1;jenkins-hbase4:44713] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 11:56:59,473 INFO [Listener at localhost/43037] wal.TestLogRolling(323): Replication=2 2023-05-27 11:56:59,472 DEBUG [RS:1;jenkins-hbase4:44713] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:56:59,473 DEBUG [RS:1;jenkins-hbase4:44713] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44713,1685188619383' 2023-05-27 11:56:59,473 DEBUG [RS:1;jenkins-hbase4:44713] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:56:59,474 DEBUG [RS:1;jenkins-hbase4:44713] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:56:59,475 DEBUG [Listener at localhost/43037] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-27 11:56:59,475 DEBUG [RS:1;jenkins-hbase4:44713] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 11:56:59,476 DEBUG [RS:1;jenkins-hbase4:44713] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 11:56:59,476 DEBUG [RS:1;jenkins-hbase4:44713] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:56:59,477 DEBUG [RS:1;jenkins-hbase4:44713] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44713,1685188619383' 2023-05-27 11:56:59,477 DEBUG [RS:1;jenkins-hbase4:44713] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 11:56:59,477 DEBUG [RS:1;jenkins-hbase4:44713] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 11:56:59,478 DEBUG [RS:1;jenkins-hbase4:44713] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 11:56:59,478 INFO [RS:1;jenkins-hbase4:44713] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 11:56:59,478 INFO [RS:1;jenkins-hbase4:44713] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-27 11:56:59,479 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33560, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-27 11:56:59,481 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36001] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 
2023-05-27 11:56:59,481 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36001] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-05-27 11:56:59,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36001] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 11:56:59,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36001] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-05-27 11:56:59,486 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 11:56:59,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36001] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 2023-05-27 11:56:59,487 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 11:56:59,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36001] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 11:56:59,491 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17 2023-05-27 11:56:59,491 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17 empty. 
2023-05-27 11:56:59,492 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17 2023-05-27 11:56:59,492 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-05-27 11:56:59,507 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-05-27 11:56:59,508 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => c56c9e71058d682e03958d5fe97d4a17, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/.tmp 2023-05-27 11:56:59,521 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:56:59,521 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing c56c9e71058d682e03958d5fe97d4a17, disabling compactions & flushes 2023-05-27 11:56:59,521 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. 2023-05-27 11:56:59,521 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. 2023-05-27 11:56:59,521 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. after waiting 0 ms 2023-05-27 11:56:59,522 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. 2023-05-27 11:56:59,522 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. 
2023-05-27 11:56:59,522 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for c56c9e71058d682e03958d5fe97d4a17: 2023-05-27 11:56:59,525 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 11:56:59,526 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685188619526"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188619526"}]},"ts":"1685188619526"} 2023-05-27 11:56:59,529 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 11:56:59,530 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 11:56:59,530 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188619530"}]},"ts":"1685188619530"} 2023-05-27 11:56:59,532 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-05-27 11:56:59,540 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-05-27 11:56:59,542 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-05-27 11:56:59,542 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-05-27 11:56:59,542 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-05-27 11:56:59,542 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=c56c9e71058d682e03958d5fe97d4a17, ASSIGN}] 2023-05-27 11:56:59,544 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=c56c9e71058d682e03958d5fe97d4a17, ASSIGN 2023-05-27 11:56:59,546 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=c56c9e71058d682e03958d5fe97d4a17, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44713,1685188619383; forceNewPlan=false, retain=false 2023-05-27 11:56:59,581 INFO [RS:1;jenkins-hbase4:44713] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44713%2C1685188619383, suffix=, logDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383, 
archiveDir=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/oldWALs, maxLogs=32 2023-05-27 11:56:59,592 INFO [RS:1;jenkins-hbase4:44713] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188619582 2023-05-27 11:56:59,592 DEBUG [RS:1;jenkins-hbase4:44713] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK], DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] 2023-05-27 11:56:59,698 INFO [jenkins-hbase4:36001] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-05-27 11:56:59,699 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c56c9e71058d682e03958d5fe97d4a17, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:56:59,699 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685188619699"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188619699"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188619699"}]},"ts":"1685188619699"} 2023-05-27 11:56:59,701 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure c56c9e71058d682e03958d5fe97d4a17, server=jenkins-hbase4.apache.org,44713,1685188619383}] 2023-05-27 11:56:59,855 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:56:59,855 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 11:56:59,858 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36312, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 11:56:59,862 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. 
2023-05-27 11:56:59,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c56c9e71058d682e03958d5fe97d4a17, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:56:59,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath c56c9e71058d682e03958d5fe97d4a17 2023-05-27 11:56:59,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:56:59,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c56c9e71058d682e03958d5fe97d4a17 2023-05-27 11:56:59,864 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c56c9e71058d682e03958d5fe97d4a17 2023-05-27 11:56:59,865 INFO [StoreOpener-c56c9e71058d682e03958d5fe97d4a17-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c56c9e71058d682e03958d5fe97d4a17 2023-05-27 11:56:59,866 DEBUG [StoreOpener-c56c9e71058d682e03958d5fe97d4a17-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/info 2023-05-27 11:56:59,866 DEBUG [StoreOpener-c56c9e71058d682e03958d5fe97d4a17-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/info 2023-05-27 11:56:59,867 INFO [StoreOpener-c56c9e71058d682e03958d5fe97d4a17-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c56c9e71058d682e03958d5fe97d4a17 columnFamilyName info 2023-05-27 11:56:59,867 INFO [StoreOpener-c56c9e71058d682e03958d5fe97d4a17-1] regionserver.HStore(310): Store=c56c9e71058d682e03958d5fe97d4a17/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:56:59,869 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17 2023-05-27 11:56:59,869 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17 2023-05-27 11:56:59,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c56c9e71058d682e03958d5fe97d4a17 2023-05-27 11:56:59,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:56:59,876 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c56c9e71058d682e03958d5fe97d4a17; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=776517, jitterRate=-0.012608155608177185}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:56:59,876 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c56c9e71058d682e03958d5fe97d4a17: 2023-05-27 11:56:59,877 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17., pid=11, masterSystemTime=1685188619855 2023-05-27 11:56:59,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. 2023-05-27 11:56:59,881 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. 
2023-05-27 11:56:59,881 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c56c9e71058d682e03958d5fe97d4a17, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:56:59,882 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685188619881"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685188619881"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685188619881"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188619881"}]},"ts":"1685188619881"} 2023-05-27 11:56:59,887 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-27 11:56:59,887 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure c56c9e71058d682e03958d5fe97d4a17, server=jenkins-hbase4.apache.org,44713,1685188619383 in 183 msec 2023-05-27 11:56:59,890 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-27 11:56:59,890 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=c56c9e71058d682e03958d5fe97d4a17, ASSIGN in 345 msec 2023-05-27 11:56:59,891 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 11:56:59,892 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188619891"}]},"ts":"1685188619891"} 2023-05-27 11:56:59,893 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-05-27 11:56:59,896 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 11:56:59,898 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 414 msec 2023-05-27 11:57:02,193 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-27 11:57:04,494 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-27 11:57:04,543 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-27 11:57:05,438 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-05-27 11:57:09,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36001] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 11:57:09,489 INFO [Listener at localhost/43037] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-05-27 11:57:09,492 DEBUG [Listener at localhost/43037] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 2023-05-27 11:57:09,492 DEBUG [Listener at localhost/43037] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. 2023-05-27 11:57:09,506 WARN [Listener at localhost/43037] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:57:09,508 WARN [Listener at localhost/43037] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:57:09,510 INFO [Listener at localhost/43037] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:57:09,514 INFO [Listener at localhost/43037] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/java.io.tmpdir/Jetty_localhost_33035_datanode____.cvvisn/webapp 2023-05-27 11:57:09,604 INFO [Listener at localhost/43037] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33035 2023-05-27 11:57:09,616 WARN [Listener at localhost/33595] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:57:09,642 WARN [Listener at localhost/33595] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:57:09,644 WARN [Listener at localhost/33595] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:57:09,645 INFO [Listener at localhost/33595] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:57:09,650 INFO [Listener at localhost/33595] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/java.io.tmpdir/Jetty_localhost_41117_datanode____.cp5vvj/webapp 2023-05-27 11:57:09,721 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8f1b719f94c801a0: Processing first storage report for DS-4bfe0c7c-b124-441b-958e-a1e8d2d98673 from datanode 7a815267-403a-463c-9f1a-331bb5743e7d 2023-05-27 11:57:09,721 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8f1b719f94c801a0: from storage DS-4bfe0c7c-b124-441b-958e-a1e8d2d98673 node DatanodeRegistration(127.0.0.1:35843, datanodeUuid=7a815267-403a-463c-9f1a-331bb5743e7d, infoPort=43325, infoSecurePort=0, ipcPort=33595, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:57:09,721 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8f1b719f94c801a0: Processing first storage report for DS-e762fe6e-4435-4df0-b449-90eb724edc68 from datanode 7a815267-403a-463c-9f1a-331bb5743e7d 2023-05-27 11:57:09,721 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x8f1b719f94c801a0: from storage DS-e762fe6e-4435-4df0-b449-90eb724edc68 node DatanodeRegistration(127.0.0.1:35843, datanodeUuid=7a815267-403a-463c-9f1a-331bb5743e7d, infoPort=43325, infoSecurePort=0, ipcPort=33595, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:57:09,759 INFO [Listener at localhost/33595] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41117 2023-05-27 11:57:09,769 WARN [Listener at localhost/46865] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:57:09,786 WARN [Listener at localhost/46865] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:57:09,788 WARN [Listener at localhost/46865] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:57:09,789 INFO [Listener at localhost/46865] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:57:09,792 INFO [Listener at localhost/46865] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/java.io.tmpdir/Jetty_localhost_44353_datanode____.2768eg/webapp 2023-05-27 11:57:09,863 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x257eb1183c133e5a: Processing first storage report for DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f from datanode 615332c0-eaa6-42e6-a19b-1bb8d2889cbb 2023-05-27 11:57:09,863 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x257eb1183c133e5a: from storage DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f node DatanodeRegistration(127.0.0.1:38803, datanodeUuid=615332c0-eaa6-42e6-a19b-1bb8d2889cbb, infoPort=44075, infoSecurePort=0, ipcPort=46865, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:57:09,863 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x257eb1183c133e5a: Processing first storage report for DS-971725bc-fe5a-4d41-b1cd-065f9c9ea8bd from datanode 615332c0-eaa6-42e6-a19b-1bb8d2889cbb 2023-05-27 11:57:09,863 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x257eb1183c133e5a: from storage DS-971725bc-fe5a-4d41-b1cd-065f9c9ea8bd node DatanodeRegistration(127.0.0.1:38803, datanodeUuid=615332c0-eaa6-42e6-a19b-1bb8d2889cbb, infoPort=44075, infoSecurePort=0, ipcPort=46865, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:57:09,893 INFO [Listener at localhost/46865] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44353 2023-05-27 11:57:09,901 WARN [Listener at localhost/43747] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:57:09,993 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa4e2154acd8a12cc: Processing first storage report for 
DS-997fb44f-efc6-4013-ab2b-30c9d171aaff from datanode 0df9c978-ca5d-45a4-9349-58999de3fd5e 2023-05-27 11:57:09,993 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa4e2154acd8a12cc: from storage DS-997fb44f-efc6-4013-ab2b-30c9d171aaff node DatanodeRegistration(127.0.0.1:42437, datanodeUuid=0df9c978-ca5d-45a4-9349-58999de3fd5e, infoPort=45519, infoSecurePort=0, ipcPort=43747, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:57:09,993 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa4e2154acd8a12cc: Processing first storage report for DS-39ecf214-1b9b-4bd2-b07c-26fb5443f44a from datanode 0df9c978-ca5d-45a4-9349-58999de3fd5e 2023-05-27 11:57:09,993 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa4e2154acd8a12cc: from storage DS-39ecf214-1b9b-4bd2-b07c-26fb5443f44a node DatanodeRegistration(127.0.0.1:42437, datanodeUuid=0df9c978-ca5d-45a4-9349-58999de3fd5e, infoPort=45519, infoSecurePort=0, ipcPort=43747, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:57:10,006 WARN [Listener at localhost/43747] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:57:10,008 WARN [ResponseProcessor for block BP-1958861295-172.31.14.131-1685188617397:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1958861295-172.31.14.131-1685188617397:blk_1073741838_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:57:10,008 WARN [ResponseProcessor for block BP-1958861295-172.31.14.131-1685188617397:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1958861295-172.31.14.131-1685188617397:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:57:10,008 WARN [DataStreamer for file /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188619582 block BP-1958861295-172.31.14.131-1685188617397:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-1958861295-172.31.14.131-1685188617397:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK], DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK]) is bad. 
2023-05-27 11:57:10,011 WARN [DataStreamer for file /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236/jenkins-hbase4.apache.org%2C44867%2C1685188618236.meta.1685188618790.meta block BP-1958861295-172.31.14.131-1685188617397:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1958861295-172.31.14.131-1685188617397:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK], DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK]) is bad. 2023-05-27 11:57:10,013 WARN [ResponseProcessor for block BP-1958861295-172.31.14.131-1685188617397:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1958861295-172.31.14.131-1685188617397:blk_1073741832_1008 java.io.IOException: Bad response ERROR for BP-1958861295-172.31.14.131-1685188617397:blk_1073741832_1008 from datanode DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-27 11:57:10,013 WARN [ResponseProcessor for block BP-1958861295-172.31.14.131-1685188617397:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1958861295-172.31.14.131-1685188617397:blk_1073741829_1005 java.io.IOException: Bad response ERROR for BP-1958861295-172.31.14.131-1685188617397:blk_1073741829_1005 from datanode DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-27 11:57:10,017 WARN [PacketResponder: BP-1958861295-172.31.14.131-1685188617397:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40121]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,020 INFO [Listener at localhost/43747] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:57:10,022 WARN [DataStreamer for file 
/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/WALs/jenkins-hbase4.apache.org,36001,1685188618137/jenkins-hbase4.apache.org%2C36001%2C1685188618137.1685188618326 block BP-1958861295-172.31.14.131-1685188617397:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1958861295-172.31.14.131-1685188617397:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK], DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK]) is bad. 2023-05-27 11:57:10,022 WARN [PacketResponder: BP-1958861295-172.31.14.131-1685188617397:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40121]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,017 WARN [DataStreamer for file /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236/jenkins-hbase4.apache.org%2C44867%2C1685188618236.1685188618638 block BP-1958861295-172.31.14.131-1685188617397:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1958861295-172.31.14.131-1685188617397:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK], DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK]) is bad. 
2023-05-27 11:57:10,027 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2002400429_17 at /127.0.0.1:37950 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:34917:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37950 dst: /127.0.0.1:34917 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,023 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_953161286_17 at /127.0.0.1:37968 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:34917:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37968 dst: /127.0.0.1:34917 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,031 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:38034 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:34917:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38034 dst: /127.0.0.1:34917 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:34917 remote=/127.0.0.1:38034]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,031 WARN [PacketResponder: BP-1958861295-172.31.14.131-1685188617397:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:34917]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,031 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_953161286_17 at /127.0.0.1:37978 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:34917:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37978 dst: /127.0.0.1:34917 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:34917 remote=/127.0.0.1:37978]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,034 WARN [PacketResponder: BP-1958861295-172.31.14.131-1685188617397:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:34917]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,039 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:51488 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:40121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:51488 dst: /127.0.0.1:40121 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,040 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_953161286_17 at /127.0.0.1:51452 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:40121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:51452 dst: /127.0.0.1:40121 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,097 WARN [BP-1958861295-172.31.14.131-1685188617397 heartbeating to localhost/127.0.0.1:35539] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1958861295-172.31.14.131-1685188617397 (Datanode Uuid a538dac3-b48d-41b5-a6f1-059b3cadc615) service to localhost/127.0.0.1:35539 2023-05-27 11:57:10,097 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data3/current/BP-1958861295-172.31.14.131-1685188617397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:57:10,098 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data4/current/BP-1958861295-172.31.14.131-1685188617397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:57:10,123 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2002400429_17 at /127.0.0.1:51406 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:40121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:51406 dst: /127.0.0.1:40121 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,124 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_953161286_17 at /127.0.0.1:51438 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:40121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:51438 dst: /127.0.0.1:40121 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,126 WARN [Listener at localhost/43747] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:57:10,126 WARN [ResponseProcessor for block BP-1958861295-172.31.14.131-1685188617397:blk_1073741833_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1958861295-172.31.14.131-1685188617397:blk_1073741833_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:57:10,126 WARN [ResponseProcessor for block BP-1958861295-172.31.14.131-1685188617397:blk_1073741829_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1958861295-172.31.14.131-1685188617397:blk_1073741829_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:57:10,126 WARN [ResponseProcessor for block BP-1958861295-172.31.14.131-1685188617397:blk_1073741838_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1958861295-172.31.14.131-1685188617397:blk_1073741838_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:57:10,126 WARN [ResponseProcessor for block BP-1958861295-172.31.14.131-1685188617397:blk_1073741832_1018] 
hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1958861295-172.31.14.131-1685188617397:blk_1073741832_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:57:10,133 INFO [Listener at localhost/43747] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:57:10,236 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_953161286_17 at /127.0.0.1:58764 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:34917:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58764 dst: /127.0.0.1:34917 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,237 WARN [BP-1958861295-172.31.14.131-1685188617397 heartbeating to localhost/127.0.0.1:35539] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 11:57:10,237 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_953161286_17 at /127.0.0.1:58760 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:34917:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58760 dst: /127.0.0.1:34917 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,237 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:58744 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:34917:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58744 dst: /127.0.0.1:34917 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,237 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2002400429_17 at /127.0.0.1:58756 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:34917:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58756 dst: /127.0.0.1:34917 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:10,238 WARN [BP-1958861295-172.31.14.131-1685188617397 heartbeating to localhost/127.0.0.1:35539] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1958861295-172.31.14.131-1685188617397 (Datanode Uuid 05a04d9d-a993-4bbd-8463-b580992e8891) service to localhost/127.0.0.1:35539 2023-05-27 11:57:10,240 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data1/current/BP-1958861295-172.31.14.131-1685188617397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:57:10,241 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data2/current/BP-1958861295-172.31.14.131-1685188617397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:57:10,245 DEBUG [Listener at localhost/43747] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 11:57:10,248 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48314, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 11:57:10,249 WARN [RS:1;jenkins-hbase4:44713.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:10,250 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44713%2C1685188619383:(num 1685188619582) roll requested 2023-05-27 11:57:10,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:10,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:48314 deadline: 1685188640248, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-27 11:57:10,255 WARN [Thread-627] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741839_1019 2023-05-27 11:57:10,258 WARN [Thread-627] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK] 2023-05-27 11:57:10,268 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-27 11:57:10,268 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188619582 with entries=1, filesize=466 B; new WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188630250 2023-05-27 11:57:10,269 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35843,DS-4bfe0c7c-b124-441b-958e-a1e8d2d98673,DISK], DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK]] 2023-05-27 11:57:10,269 DEBUG [regionserver/jenkins-hbase4:0.logRoller] 
wal.AbstractFSWAL(716): hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188619582 is not closed yet, will try archiving it next time 2023-05-27 11:57:10,269 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:10,269 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188619582; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:10,270 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188619582 to hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/oldWALs/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188619582 2023-05-27 11:57:22,306 INFO [Listener at localhost/43747] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188630250 2023-05-27 11:57:22,306 WARN [Listener at localhost/43747] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:57:22,308 WARN [ResponseProcessor for block BP-1958861295-172.31.14.131-1685188617397:blk_1073741840_1020] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1958861295-172.31.14.131-1685188617397:blk_1073741840_1020 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:57:22,308 WARN [DataStreamer for file /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188630250 block BP-1958861295-172.31.14.131-1685188617397:blk_1073741840_1020] hdfs.DataStreamer(1548): Error Recovery for 
BP-1958861295-172.31.14.131-1685188617397:blk_1073741840_1020 in pipeline [DatanodeInfoWithStorage[127.0.0.1:35843,DS-4bfe0c7c-b124-441b-958e-a1e8d2d98673,DISK], DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:35843,DS-4bfe0c7c-b124-441b-958e-a1e8d2d98673,DISK]) is bad. 2023-05-27 11:57:22,311 INFO [Listener at localhost/43747] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:57:22,313 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:50600 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:42437:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50600 dst: /127.0.0.1:42437 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:42437 remote=/127.0.0.1:50600]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:22,313 WARN [PacketResponder: BP-1958861295-172.31.14.131-1685188617397:blk_1073741840_1020, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:42437]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:22,314 ERROR [DataXceiver 
for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:49896 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:35843:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49896 dst: /127.0.0.1:35843 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:22,416 WARN [BP-1958861295-172.31.14.131-1685188617397 heartbeating to localhost/127.0.0.1:35539] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 11:57:22,416 WARN [BP-1958861295-172.31.14.131-1685188617397 heartbeating to localhost/127.0.0.1:35539] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1958861295-172.31.14.131-1685188617397 (Datanode Uuid 7a815267-403a-463c-9f1a-331bb5743e7d) service to localhost/127.0.0.1:35539 2023-05-27 11:57:22,417 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data5/current/BP-1958861295-172.31.14.131-1685188617397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:57:22,417 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data6/current/BP-1958861295-172.31.14.131-1685188617397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:57:22,422 WARN [sync.3] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK]] 2023-05-27 11:57:22,422 WARN [sync.3] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK]] 2023-05-27 11:57:22,422 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44713%2C1685188619383:(num 1685188630250) roll requested 2023-05-27 11:57:22,425 WARN [Thread-637] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741841_1022 2023-05-27 11:57:22,426 WARN [Thread-637] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35843,DS-4bfe0c7c-b124-441b-958e-a1e8d2d98673,DISK] 2023-05-27 11:57:22,427 WARN [Thread-637] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741842_1023 2023-05-27 11:57:22,428 WARN [Thread-637] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK] 2023-05-27 11:57:22,432 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:37136 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741843_1024]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data8/current]'}, localName='127.0.0.1:38803', datanodeUuid='615332c0-eaa6-42e6-a19b-1bb8d2889cbb', xmitsInProgress=0}:Exception transfering block BP-1958861295-172.31.14.131-1685188617397:blk_1073741843_1024 to mirror 127.0.0.1:40121: java.net.ConnectException: Connection refused 2023-05-27 11:57:22,432 WARN [Thread-637] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741843_1024 2023-05-27 11:57:22,432 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:37136 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741843_1024]] datanode.DataXceiver(323): 127.0.0.1:38803:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37136 dst: /127.0.0.1:38803 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:22,432 WARN [Thread-637] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK] 2023-05-27 11:57:22,440 INFO [regionserver/jenkins-hbase4:0.logRoller] 
wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188630250 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188642422 2023-05-27 11:57:22,440 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38803,DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f,DISK], DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK]] 2023-05-27 11:57:22,440 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188630250 is not closed yet, will try archiving it next time 2023-05-27 11:57:25,001 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@543d9214] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:42437, datanodeUuid=0df9c978-ca5d-45a4-9349-58999de3fd5e, infoPort=45519, infoSecurePort=0, ipcPort=43747, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397):Failed to transfer BP-1958861295-172.31.14.131-1685188617397:blk_1073741840_1021 to 127.0.0.1:34917 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:26,426 WARN [Listener at localhost/43747] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:57:26,428 WARN [ResponseProcessor for block BP-1958861295-172.31.14.131-1685188617397:blk_1073741844_1025] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1958861295-172.31.14.131-1685188617397:blk_1073741844_1025 java.io.IOException: Bad response ERROR for BP-1958861295-172.31.14.131-1685188617397:blk_1073741844_1025 from datanode DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-27 11:57:26,429 WARN [DataStreamer for file /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188642422 block BP-1958861295-172.31.14.131-1685188617397:blk_1073741844_1025] hdfs.DataStreamer(1548): Error Recovery for BP-1958861295-172.31.14.131-1685188617397:blk_1073741844_1025 in pipeline [DatanodeInfoWithStorage[127.0.0.1:38803,DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f,DISK], DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK]) is bad. 
2023-05-27 11:57:26,429 WARN [PacketResponder: BP-1958861295-172.31.14.131-1685188617397:blk_1073741844_1025, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:42437]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:26,431 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:37150 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741844_1025]] datanode.DataXceiver(323): 127.0.0.1:38803:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37150 dst: /127.0.0.1:38803 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:26,433 INFO [Listener at localhost/43747] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:57:26,537 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:37824 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741844_1025]] datanode.DataXceiver(323): 127.0.0.1:42437:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37824 dst: /127.0.0.1:42437 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:26,539 WARN [BP-1958861295-172.31.14.131-1685188617397 heartbeating to localhost/127.0.0.1:35539] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 11:57:26,539 WARN [BP-1958861295-172.31.14.131-1685188617397 heartbeating to localhost/127.0.0.1:35539] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1958861295-172.31.14.131-1685188617397 (Datanode Uuid 0df9c978-ca5d-45a4-9349-58999de3fd5e) service to localhost/127.0.0.1:35539 2023-05-27 11:57:26,540 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data9/current/BP-1958861295-172.31.14.131-1685188617397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:57:26,540 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data10/current/BP-1958861295-172.31.14.131-1685188617397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:57:26,545 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38803,DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f,DISK]] 2023-05-27 11:57:26,545 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38803,DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f,DISK]] 2023-05-27 11:57:26,545 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44713%2C1685188619383:(num 1685188642422) roll requested 2023-05-27 11:57:26,548 WARN [Thread-649] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741845_1027 2023-05-27 11:57:26,549 WARN [Thread-649] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK] 2023-05-27 11:57:26,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] regionserver.HRegion(9158): Flush requested on c56c9e71058d682e03958d5fe97d4a17 2023-05-27 11:57:26,551 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c56c9e71058d682e03958d5fe97d4a17 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 11:57:26,551 WARN [Thread-649] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741846_1028 2023-05-27 11:57:26,552 WARN [Thread-649] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK] 2023-05-27 11:57:26,554 WARN [Thread-649] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741847_1029 2023-05-27 11:57:26,554 WARN [Thread-649] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35843,DS-4bfe0c7c-b124-441b-958e-a1e8d2d98673,DISK] 2023-05-27 11:57:26,557 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:37160 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741848_1030]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data8/current]'}, localName='127.0.0.1:38803', datanodeUuid='615332c0-eaa6-42e6-a19b-1bb8d2889cbb', xmitsInProgress=0}:Exception transfering block BP-1958861295-172.31.14.131-1685188617397:blk_1073741848_1030 to mirror 127.0.0.1:34917: java.net.ConnectException: Connection refused 2023-05-27 11:57:26,557 WARN [Thread-649] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741848_1030 2023-05-27 11:57:26,558 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:37160 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741848_1030]] datanode.DataXceiver(323): 127.0.0.1:38803:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37160 dst: /127.0.0.1:38803 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:26,558 WARN [Thread-649] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK] 2023-05-27 11:57:26,559 WARN [IPC Server handler 2 on default port 35539] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-27 11:57:26,559 WARN [IPC Server handler 2 on default port 35539] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-27 11:57:26,559 WARN [IPC Server handler 2 on default port 35539] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-27 11:57:26,560 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:37174 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741849_1031]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data8/current]'}, localName='127.0.0.1:38803', datanodeUuid='615332c0-eaa6-42e6-a19b-1bb8d2889cbb', xmitsInProgress=0}:Exception transfering block BP-1958861295-172.31.14.131-1685188617397:blk_1073741849_1031 to mirror 127.0.0.1:35843: java.net.ConnectException: Connection refused 2023-05-27 11:57:26,560 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741849_1031 2023-05-27 11:57:26,560 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:37174 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741849_1031]] datanode.DataXceiver(323): 127.0.0.1:38803:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37174 dst: /127.0.0.1:38803 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:26,560 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35843,DS-4bfe0c7c-b124-441b-958e-a1e8d2d98673,DISK] 2023-05-27 11:57:26,565 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:37182 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741851_1033]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data8/current]'}, localName='127.0.0.1:38803', datanodeUuid='615332c0-eaa6-42e6-a19b-1bb8d2889cbb', xmitsInProgress=0}:Exception transfering block BP-1958861295-172.31.14.131-1685188617397:blk_1073741851_1033 to mirror 127.0.0.1:42437: java.net.ConnectException: Connection refused 2023-05-27 11:57:26,565 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:37182 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741851_1033]] datanode.DataXceiver(323): 127.0.0.1:38803:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37182 dst: /127.0.0.1:38803 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:26,565 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741851_1033 2023-05-27 11:57:26,566 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK] 2023-05-27 11:57:26,567 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188642422 with entries=13, filesize=14.09 KB; new WAL 
/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188646545 2023-05-27 11:57:26,567 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38803,DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f,DISK]] 2023-05-27 11:57:26,567 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188642422 is not closed yet, will try archiving it next time 2023-05-27 11:57:26,568 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:37190 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741852_1034]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data8/current]'}, localName='127.0.0.1:38803', datanodeUuid='615332c0-eaa6-42e6-a19b-1bb8d2889cbb', xmitsInProgress=0}:Exception transfering block BP-1958861295-172.31.14.131-1685188617397:blk_1073741852_1034 to mirror 127.0.0.1:40121: java.net.ConnectException: Connection refused 2023-05-27 11:57:26,568 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741852_1034 2023-05-27 11:57:26,568 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:37190 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741852_1034]] datanode.DataXceiver(323): 127.0.0.1:38803:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37190 dst: /127.0.0.1:38803 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:26,569 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK] 2023-05-27 11:57:26,570 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741853_1035 2023-05-27 11:57:26,570 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK] 2023-05-27 11:57:26,571 WARN [IPC Server handler 0 on default port 35539] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in 
need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-27 11:57:26,571 WARN [IPC Server handler 0 on default port 35539] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-27 11:57:26,571 WARN [IPC Server handler 0 on default port 35539] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-27 11:57:26,765 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38803,DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f,DISK]] 2023-05-27 11:57:26,765 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38803,DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f,DISK]] 2023-05-27 11:57:26,765 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44713%2C1685188619383:(num 1685188646545) roll requested 2023-05-27 11:57:26,768 WARN [Thread-661] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741855_1037 2023-05-27 11:57:26,769 WARN [Thread-661] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK] 2023-05-27 11:57:26,770 WARN [Thread-661] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741856_1038 2023-05-27 11:57:26,770 WARN [Thread-661] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40121,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK] 2023-05-27 11:57:26,771 WARN [Thread-661] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741857_1039 2023-05-27 11:57:26,772 WARN [Thread-661] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35843,DS-4bfe0c7c-b124-441b-958e-a1e8d2d98673,DISK] 2023-05-27 11:57:26,773 WARN [Thread-661] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741858_1040 2023-05-27 11:57:26,773 WARN [Thread-661] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK] 2023-05-27 11:57:26,774 WARN [IPC Server handler 3 on default port 35539] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable 
DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-27 11:57:26,774 WARN [IPC Server handler 3 on default port 35539] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-27 11:57:26,774 WARN [IPC Server handler 3 on default port 35539] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-27 11:57:26,778 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188646545 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188646765 2023-05-27 11:57:26,778 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38803,DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f,DISK]] 2023-05-27 11:57:26,778 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188642422 is not closed yet, will try archiving it next time 2023-05-27 11:57:26,778 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188646545 is not closed yet, will try archiving it next time 2023-05-27 11:57:26,968 WARN [sync.1] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 
2023-05-27 11:57:26,976 DEBUG [Close-WAL-Writer-0] wal.AbstractFSWAL(716): hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188646545 is not closed yet, will try archiving it next time 2023-05-27 11:57:26,976 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/.tmp/info/d60d404f1fbd41da91b877aa083d9edd 2023-05-27 11:57:26,985 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/.tmp/info/d60d404f1fbd41da91b877aa083d9edd as hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/info/d60d404f1fbd41da91b877aa083d9edd 2023-05-27 11:57:26,991 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/info/d60d404f1fbd41da91b877aa083d9edd, entries=5, sequenceid=12, filesize=10.0 K 2023-05-27 11:57:26,992 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for c56c9e71058d682e03958d5fe97d4a17 in 441ms, sequenceid=12, compaction requested=false 2023-05-27 11:57:26,992 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c56c9e71058d682e03958d5fe97d4a17: 2023-05-27 11:57:27,174 WARN [Listener at localhost/43747] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:57:27,176 WARN [Listener at localhost/43747] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:57:27,177 INFO [Listener at localhost/43747] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:57:27,181 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188630250 to hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/oldWALs/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188630250 2023-05-27 11:57:27,182 INFO [Listener at localhost/43747] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/java.io.tmpdir/Jetty_localhost_41745_datanode____.kjnxha/webapp 2023-05-27 11:57:27,272 INFO [Listener at localhost/43747] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41745 2023-05-27 11:57:27,280 WARN [Listener at localhost/43651] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 
11:57:27,372 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc4581d32e3fb0f92: Processing first storage report for DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8 from datanode a538dac3-b48d-41b5-a6f1-059b3cadc615 2023-05-27 11:57:27,372 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc4581d32e3fb0f92: from storage DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8 node DatanodeRegistration(127.0.0.1:43567, datanodeUuid=a538dac3-b48d-41b5-a6f1-059b3cadc615, infoPort=42705, infoSecurePort=0, ipcPort=43651, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:57:27,373 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc4581d32e3fb0f92: Processing first storage report for DS-23d4ac92-3d2c-4f84-bbeb-d09d0b242b39 from datanode a538dac3-b48d-41b5-a6f1-059b3cadc615 2023-05-27 11:57:27,373 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc4581d32e3fb0f92: from storage DS-23d4ac92-3d2c-4f84-bbeb-d09d0b242b39 node DatanodeRegistration(127.0.0.1:43567, datanodeUuid=a538dac3-b48d-41b5-a6f1-059b3cadc615, infoPort=42705, infoSecurePort=0, ipcPort=43651, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:57:27,864 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@141967aa] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38803, datanodeUuid=615332c0-eaa6-42e6-a19b-1bb8d2889cbb, infoPort=44075, infoSecurePort=0, ipcPort=46865, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397):Failed to transfer BP-1958861295-172.31.14.131-1685188617397:blk_1073741844_1026 to 127.0.0.1:42437 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:27,864 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@ed7466f] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38803, datanodeUuid=615332c0-eaa6-42e6-a19b-1bb8d2889cbb, infoPort=44075, infoSecurePort=0, ipcPort=46865, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397):Failed to transfer BP-1958861295-172.31.14.131-1685188617397:blk_1073741854_1036 to 127.0.0.1:42437 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:28,410 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting 
roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:28,410 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36001%2C1685188618137:(num 1685188618326) roll requested 2023-05-27 11:57:28,414 WARN [Thread-700] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741860_1042 2023-05-27 11:57:28,415 WARN [Thread-700] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK] 2023-05-27 11:57:28,415 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:28,416 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:28,417 WARN [Thread-700] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741861_1043 2023-05-27 11:57:28,417 WARN [Thread-700] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK] 2023-05-27 11:57:28,418 WARN [Thread-700] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741862_1044 2023-05-27 11:57:28,419 WARN [Thread-700] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35843,DS-4bfe0c7c-b124-441b-958e-a1e8d2d98673,DISK] 2023-05-27 11:57:28,424 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-27 11:57:28,424 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/WALs/jenkins-hbase4.apache.org,36001,1685188618137/jenkins-hbase4.apache.org%2C36001%2C1685188618137.1685188618326 with entries=88, filesize=43.70 KB; new WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/WALs/jenkins-hbase4.apache.org,36001,1685188618137/jenkins-hbase4.apache.org%2C36001%2C1685188618137.1685188648411 2023-05-27 11:57:28,424 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38803,DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f,DISK], DatanodeInfoWithStorage[127.0.0.1:43567,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK]] 2023-05-27 11:57:28,424 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/WALs/jenkins-hbase4.apache.org,36001,1685188618137/jenkins-hbase4.apache.org%2C36001%2C1685188618137.1685188618326 is not closed yet, will try archiving it next time 2023-05-27 11:57:28,424 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:28,425 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/WALs/jenkins-hbase4.apache.org,36001,1685188618137/jenkins-hbase4.apache.org%2C36001%2C1685188618137.1685188618326; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:28,864 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@3f6475c2] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38803, datanodeUuid=615332c0-eaa6-42e6-a19b-1bb8d2889cbb, infoPort=44075, infoSecurePort=0, ipcPort=46865, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397):Failed to transfer BP-1958861295-172.31.14.131-1685188617397:blk_1073741850_1032 to 127.0.0.1:42437 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:40,374 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@1d9f03df] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:43567, datanodeUuid=a538dac3-b48d-41b5-a6f1-059b3cadc615, infoPort=42705, infoSecurePort=0, ipcPort=43651, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397):Failed to transfer BP-1958861295-172.31.14.131-1685188617397:blk_1073741836_1012 to 127.0.0.1:42437 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:40,374 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@7af5cba9] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:43567, datanodeUuid=a538dac3-b48d-41b5-a6f1-059b3cadc615, infoPort=42705, infoSecurePort=0, ipcPort=43651, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397):Failed to transfer BP-1958861295-172.31.14.131-1685188617397:blk_1073741834_1010 to 127.0.0.1:35843 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:41,374 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@36c34adb] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:43567, datanodeUuid=a538dac3-b48d-41b5-a6f1-059b3cadc615, infoPort=42705, infoSecurePort=0, ipcPort=43651, 
storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397):Failed to transfer BP-1958861295-172.31.14.131-1685188617397:blk_1073741830_1006 to 127.0.0.1:35843 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:41,374 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@743970e2] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:43567, datanodeUuid=a538dac3-b48d-41b5-a6f1-059b3cadc615, infoPort=42705, infoSecurePort=0, ipcPort=43651, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397):Failed to transfer BP-1958861295-172.31.14.131-1685188617397:blk_1073741828_1004 to 127.0.0.1:42437 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:43,374 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@11dd0441] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:43567, datanodeUuid=a538dac3-b48d-41b5-a6f1-059b3cadc615, infoPort=42705, infoSecurePort=0, ipcPort=43651, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397):Failed to transfer BP-1958861295-172.31.14.131-1685188617397:blk_1073741827_1003 to 127.0.0.1:42437 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:46,017 WARN [Thread-715] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741864_1046 2023-05-27 11:57:46,018 WARN [Thread-715] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK] 2023-05-27 11:57:46,026 INFO [Listener at localhost/43651] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188646765 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188666013 2023-05-27 11:57:46,026 DEBUG [Listener at localhost/43651] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:43567,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK], DatanodeInfoWithStorage[127.0.0.1:38803,DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f,DISK]] 2023-05-27 11:57:46,026 DEBUG [Listener at localhost/43651] wal.AbstractFSWAL(716): hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383/jenkins-hbase4.apache.org%2C44713%2C1685188619383.1685188646765 is not closed yet, will try archiving it next time 2023-05-27 11:57:46,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] regionserver.HRegion(9158): Flush requested on c56c9e71058d682e03958d5fe97d4a17 2023-05-27 11:57:46,031 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c56c9e71058d682e03958d5fe97d4a17 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-05-27 11:57:46,031 INFO [sync.0] wal.FSHLog(774): LowReplication-Roller was enabled. 2023-05-27 11:57:46,038 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:46148 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741866_1048]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data8/current]'}, localName='127.0.0.1:38803', datanodeUuid='615332c0-eaa6-42e6-a19b-1bb8d2889cbb', xmitsInProgress=0}:Exception transfering block BP-1958861295-172.31.14.131-1685188617397:blk_1073741866_1048 to mirror 127.0.0.1:42437: java.net.ConnectException: Connection refused 2023-05-27 11:57:46,038 WARN [Thread-722] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741866_1048 2023-05-27 11:57:46,038 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_484206796_17 at /127.0.0.1:46148 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741866_1048]] datanode.DataXceiver(323): 127.0.0.1:38803:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46148 dst: /127.0.0.1:38803 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:46,039 WARN [Thread-722] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK] 2023-05-27 11:57:46,048 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-27 11:57:46,048 INFO [Listener at localhost/43651] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-27 11:57:46,048 
DEBUG [Listener at localhost/43651] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x65ab8d8a to 127.0.0.1:49196 2023-05-27 11:57:46,049 DEBUG [Listener at localhost/43651] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:57:46,049 DEBUG [Listener at localhost/43651] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-27 11:57:46,049 DEBUG [Listener at localhost/43651] util.JVMClusterUtil(257): Found active master hash=657169290, stopped=false 2023-05-27 11:57:46,049 INFO [Listener at localhost/43651] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36001,1685188618137 2023-05-27 11:57:46,051 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/.tmp/info/6b5981e72e084e2ba9cf87a241756229 2023-05-27 11:57:46,051 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 11:57:46,051 INFO [Listener at localhost/43651] procedure2.ProcedureExecutor(629): Stopping 2023-05-27 11:57:46,051 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 11:57:46,051 DEBUG [Listener at localhost/43651] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7f452336 to 127.0.0.1:49196 2023-05-27 11:57:46,051 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:57:46,052 DEBUG [Listener at localhost/43651] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:57:46,052 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:57:46,052 INFO [Listener at localhost/43651] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44867,1685188618236' ***** 2023-05-27 11:57:46,052 INFO [Listener at localhost/43651] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 11:57:46,051 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44713-0x1006c802a810005, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 11:57:46,052 INFO [Listener at localhost/43651] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44713,1685188619383' ***** 2023-05-27 11:57:46,052 INFO [RS:0;jenkins-hbase4:44867] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 11:57:46,053 INFO [RS:0;jenkins-hbase4:44867] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-05-27 11:57:46,052 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:57:46,053 INFO [RS:0;jenkins-hbase4:44867] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-27 11:57:46,053 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 11:57:46,053 INFO [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(3303): Received CLOSE for 8c17b907cf93c567a79b078a9826aa1c 2023-05-27 11:57:46,052 INFO [Listener at localhost/43651] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 11:57:46,053 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44713-0x1006c802a810005, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:57:46,054 INFO [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:57:46,054 INFO [RS:1;jenkins-hbase4:44713] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 11:57:46,054 DEBUG [RS:0;jenkins-hbase4:44867] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6688172e to 127.0.0.1:49196 2023-05-27 11:57:46,054 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8c17b907cf93c567a79b078a9826aa1c, disabling compactions & flushes 2023-05-27 11:57:46,054 DEBUG [RS:0;jenkins-hbase4:44867] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:57:46,054 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 2023-05-27 11:57:46,054 INFO [RS:0;jenkins-hbase4:44867] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-27 11:57:46,055 INFO [RS:0;jenkins-hbase4:44867] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-27 11:57:46,055 INFO [RS:0;jenkins-hbase4:44867] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-27 11:57:46,054 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 2023-05-27 11:57:46,055 INFO [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 11:57:46,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. after waiting 0 ms 2023-05-27 11:57:46,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 
2023-05-27 11:57:46,055 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8c17b907cf93c567a79b078a9826aa1c 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-27 11:57:46,055 INFO [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-27 11:57:46,055 DEBUG [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1478): Online Regions={8c17b907cf93c567a79b078a9826aa1c=hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c., 1588230740=hbase:meta,,1.1588230740} 2023-05-27 11:57:46,055 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 11:57:46,055 WARN [RS:0;jenkins-hbase4:44867.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:46,055 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 11:57:46,056 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44867%2C1685188618236:(num 1685188618638) roll requested 2023-05-27 11:57:46,055 DEBUG [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1504): Waiting on 1588230740, 8c17b907cf93c567a79b078a9826aa1c 2023-05-27 11:57:46,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8c17b907cf93c567a79b078a9826aa1c: 2023-05-27 11:57:46,056 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 11:57:46,056 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 11:57:46,056 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 11:57:46,056 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.92 KB heapSize=5.45 KB 2023-05-27 11:57:46,057 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,44867,1685188618236: Unrecoverable exception while closing hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 
***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:46,057 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:46,057 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-27 11:57:46,058 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 11:57:46,058 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-27 11:57:46,064 WARN [Thread-731] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741868_1050 2023-05-27 11:57:46,064 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/.tmp/info/6b5981e72e084e2ba9cf87a241756229 as hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/info/6b5981e72e084e2ba9cf87a241756229 2023-05-27 11:57:46,064 WARN [Thread-731] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK] 2023-05-27 11:57:46,064 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-27 11:57:46,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-27 11:57:46,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-27 11:57:46,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-27 11:57:46,066 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 993001472, "init": 513802240, "max": 2051014656, "used": 358065624 }, "NonHeapMemoryUsage": { "committed": 133259264, "init": 2555904, "max": -1, "used": 130771272 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-27 11:57:46,073 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36001] master.MasterRpcServices(609): jenkins-hbase4.apache.org,44867,1685188618236 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,44867,1685188618236: Unrecoverable exception while closing hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:46,073 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/info/6b5981e72e084e2ba9cf87a241756229, entries=8, sequenceid=25, filesize=13.2 K 2023-05-27 11:57:46,075 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for c56c9e71058d682e03958d5fe97d4a17 in 45ms, sequenceid=25, compaction requested=false 2023-05-27 11:57:46,075 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c56c9e71058d682e03958d5fe97d4a17: 2023-05-27 11:57:46,076 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-05-27 11:57:46,076 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 11:57:46,076 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/info/6b5981e72e084e2ba9cf87a241756229 because midkey is the same as first or last row 2023-05-27 11:57:46,076 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 11:57:46,076 INFO [RS:1;jenkins-hbase4:44713] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-27 11:57:46,076 INFO [RS:1;jenkins-hbase4:44713] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-27 11:57:46,076 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-05-27 11:57:46,076 INFO [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(3303): Received CLOSE for c56c9e71058d682e03958d5fe97d4a17 2023-05-27 11:57:46,076 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236/jenkins-hbase4.apache.org%2C44867%2C1685188618236.1685188618638 with entries=3, filesize=600 B; new WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236/jenkins-hbase4.apache.org%2C44867%2C1685188618236.1685188666056 2023-05-27 11:57:46,078 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43567,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK], DatanodeInfoWithStorage[127.0.0.1:38803,DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f,DISK]] 2023-05-27 11:57:46,078 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236/jenkins-hbase4.apache.org%2C44867%2C1685188618236.1685188618638 is not closed yet, will try archiving it next time 2023-05-27 11:57:46,079 INFO [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:57:46,079 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:46,079 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44867%2C1685188618236.meta:.meta(num 1685188618790) roll requested 2023-05-27 11:57:46,079 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236/jenkins-hbase4.apache.org%2C44867%2C1685188618236.1685188618638; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:46,079 DEBUG [RS:1;jenkins-hbase4:44713] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x021b88a0 to 127.0.0.1:49196 2023-05-27 11:57:46,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c56c9e71058d682e03958d5fe97d4a17, disabling compactions & flushes 2023-05-27 11:57:46,079 DEBUG [RS:1;jenkins-hbase4:44713] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:57:46,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. 2023-05-27 11:57:46,079 INFO [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-05-27 11:57:46,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. 2023-05-27 11:57:46,079 DEBUG [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(1478): Online Regions={c56c9e71058d682e03958d5fe97d4a17=TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17.} 2023-05-27 11:57:46,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. after waiting 0 ms 2023-05-27 11:57:46,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. 
2023-05-27 11:57:46,080 DEBUG [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(1504): Waiting on c56c9e71058d682e03958d5fe97d4a17 2023-05-27 11:57:46,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c56c9e71058d682e03958d5fe97d4a17 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-27 11:57:46,083 WARN [Thread-739] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741870_1052 2023-05-27 11:57:46,084 WARN [Thread-739] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK] 2023-05-27 11:57:46,095 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-05-27 11:57:46,096 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236/jenkins-hbase4.apache.org%2C44867%2C1685188618236.meta.1685188618790.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236/jenkins-hbase4.apache.org%2C44867%2C1685188618236.meta.1685188666079.meta 2023-05-27 11:57:46,096 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38803,DS-fe682c3a-2005-4d0f-bcb6-8d8c1f68272f,DISK], DatanodeInfoWithStorage[127.0.0.1:43567,DS-c5f9da89-57f8-43b2-a4c3-6960bfcff6b8,DISK]] 2023-05-27 11:57:46,097 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:46,097 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236/jenkins-hbase4.apache.org%2C44867%2C1685188618236.meta.1685188618790.meta is not closed yet, will try archiving it next time 2023-05-27 11:57:46,097 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236/jenkins-hbase4.apache.org%2C44867%2C1685188618236.meta.1685188618790.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34917,DS-becc151e-14aa-4092-a53a-0ff7c9b1e275,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:57:46,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/.tmp/info/a5795b3d836942b3867e1e7161f7033a 2023-05-27 11:57:46,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/.tmp/info/a5795b3d836942b3867e1e7161f7033a as hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/info/a5795b3d836942b3867e1e7161f7033a 2023-05-27 11:57:46,111 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/info/a5795b3d836942b3867e1e7161f7033a, entries=9, sequenceid=37, filesize=14.2 K 2023-05-27 11:57:46,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=0 B/0 for c56c9e71058d682e03958d5fe97d4a17 in 32ms, sequenceid=37, compaction requested=true 2023-05-27 11:57:46,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/data/default/TestLogRolling-testLogRollOnDatanodeDeath/c56c9e71058d682e03958d5fe97d4a17/recovered.edits/40.seqid, newMaxSeqId=40, maxSeqId=1 2023-05-27 11:57:46,119 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. 2023-05-27 11:57:46,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c56c9e71058d682e03958d5fe97d4a17: 2023-05-27 11:57:46,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685188619481.c56c9e71058d682e03958d5fe97d4a17. 
2023-05-27 11:57:46,256 INFO [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(3303): Received CLOSE for 8c17b907cf93c567a79b078a9826aa1c 2023-05-27 11:57:46,256 INFO [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 11:57:46,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8c17b907cf93c567a79b078a9826aa1c, disabling compactions & flushes 2023-05-27 11:57:46,256 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 11:57:46,256 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 2023-05-27 11:57:46,256 DEBUG [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1504): Waiting on 1588230740, 8c17b907cf93c567a79b078a9826aa1c 2023-05-27 11:57:46,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 2023-05-27 11:57:46,256 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 11:57:46,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. after waiting 0 ms 2023-05-27 11:57:46,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 2023-05-27 11:57:46,257 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 11:57:46,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8c17b907cf93c567a79b078a9826aa1c: 2023-05-27 11:57:46,257 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 11:57:46,257 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 11:57:46,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685188618848.8c17b907cf93c567a79b078a9826aa1c. 2023-05-27 11:57:46,257 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 11:57:46,257 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-27 11:57:46,280 INFO [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44713,1685188619383; all regions closed. 
2023-05-27 11:57:46,280 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:57:46,290 DEBUG [RS:1;jenkins-hbase4:44713] wal.AbstractFSWAL(1028): Moved 4 WAL file(s) to /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/oldWALs 2023-05-27 11:57:46,290 INFO [RS:1;jenkins-hbase4:44713] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C44713%2C1685188619383:(num 1685188666013) 2023-05-27 11:57:46,290 DEBUG [RS:1;jenkins-hbase4:44713] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:57:46,290 INFO [RS:1;jenkins-hbase4:44713] regionserver.LeaseManager(133): Closed leases 2023-05-27 11:57:46,290 INFO [RS:1;jenkins-hbase4:44713] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-27 11:57:46,290 INFO [RS:1;jenkins-hbase4:44713] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-27 11:57:46,290 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 11:57:46,290 INFO [RS:1;jenkins-hbase4:44713] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-27 11:57:46,290 INFO [RS:1;jenkins-hbase4:44713] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-27 11:57:46,291 INFO [RS:1;jenkins-hbase4:44713] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44713 2023-05-27 11:57:46,295 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:57:46,295 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:57:46,295 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:57:46,295 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44713-0x1006c802a810005, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44713,1685188619383 2023-05-27 11:57:46,296 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44713-0x1006c802a810005, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:57:46,297 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44713,1685188619383] 2023-05-27 11:57:46,297 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44713,1685188619383; numProcessing=1 2023-05-27 11:57:46,300 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node 
/hbase/draining/jenkins-hbase4.apache.org,44713,1685188619383 already deleted, retry=false 2023-05-27 11:57:46,300 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44713,1685188619383 expired; onlineServers=1 2023-05-27 11:57:46,375 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@699832ad] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:43567, datanodeUuid=a538dac3-b48d-41b5-a6f1-059b3cadc615, infoPort=42705, infoSecurePort=0, ipcPort=43651, storageInfo=lv=-57;cid=testClusterID;nsid=871993280;c=1685188617397):Failed to transfer BP-1958861295-172.31.14.131-1685188617397:blk_1073741826_1002 to 127.0.0.1:42437 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:46,457 INFO [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-27 11:57:46,457 INFO [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44867,1685188618236; all regions closed. 2023-05-27 11:57:46,457 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:57:46,462 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/WALs/jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:57:46,467 DEBUG [RS:0;jenkins-hbase4:44867] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:57:46,467 INFO [RS:0;jenkins-hbase4:44867] regionserver.LeaseManager(133): Closed leases 2023-05-27 11:57:46,467 INFO [RS:0;jenkins-hbase4:44867] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-27 11:57:46,467 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-27 11:57:46,468 INFO [RS:0;jenkins-hbase4:44867] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44867 2023-05-27 11:57:46,470 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44867,1685188618236 2023-05-27 11:57:46,470 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:57:46,471 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44867,1685188618236] 2023-05-27 11:57:46,471 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44867,1685188618236; numProcessing=2 2023-05-27 11:57:46,472 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44867,1685188618236 already deleted, retry=false 2023-05-27 11:57:46,472 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44867,1685188618236 expired; onlineServers=0 2023-05-27 11:57:46,472 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,36001,1685188618137' ***** 2023-05-27 11:57:46,472 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-27 11:57:46,473 DEBUG [M:0;jenkins-hbase4:36001] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@23141bb7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 11:57:46,473 INFO [M:0;jenkins-hbase4:36001] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36001,1685188618137 2023-05-27 11:57:46,473 INFO [M:0;jenkins-hbase4:36001] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36001,1685188618137; all regions closed. 2023-05-27 11:57:46,473 DEBUG [M:0;jenkins-hbase4:36001] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:57:46,473 DEBUG [M:0;jenkins-hbase4:36001] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-27 11:57:46,473 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-27 11:57:46,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188618413] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188618413,5,FailOnTimeoutGroup] 2023-05-27 11:57:46,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188618413] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188618413,5,FailOnTimeoutGroup] 2023-05-27 11:57:46,473 DEBUG [M:0;jenkins-hbase4:36001] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-27 11:57:46,474 INFO [M:0;jenkins-hbase4:36001] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-05-27 11:57:46,474 INFO [M:0;jenkins-hbase4:36001] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-27 11:57:46,474 INFO [M:0;jenkins-hbase4:36001] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-27 11:57:46,474 DEBUG [M:0;jenkins-hbase4:36001] master.HMaster(1512): Stopping service threads 2023-05-27 11:57:46,474 INFO [M:0;jenkins-hbase4:36001] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-27 11:57:46,475 ERROR [M:0;jenkins-hbase4:36001] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-27 11:57:46,475 INFO [M:0;jenkins-hbase4:36001] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-27 11:57:46,475 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-27 11:57:46,476 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-27 11:57:46,476 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:57:46,476 DEBUG [M:0;jenkins-hbase4:36001] zookeeper.ZKUtil(398): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-27 11:57:46,476 WARN [M:0;jenkins-hbase4:36001] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-27 11:57:46,476 INFO [M:0;jenkins-hbase4:36001] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-27 11:57:46,476 INFO [M:0;jenkins-hbase4:36001] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-27 11:57:46,477 DEBUG [M:0;jenkins-hbase4:36001] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 11:57:46,477 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:57:46,477 INFO [M:0;jenkins-hbase4:36001] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:57:46,477 DEBUG [M:0;jenkins-hbase4:36001] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:57:46,477 DEBUG [M:0;jenkins-hbase4:36001] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 11:57:46,477 DEBUG [M:0;jenkins-hbase4:36001] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-27 11:57:46,477 INFO [M:0;jenkins-hbase4:36001] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.07 KB heapSize=45.73 KB 2023-05-27 11:57:46,485 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2002400429_17 at /127.0.0.1:58052 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741873_1055]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data4/current]'}, localName='127.0.0.1:43567', datanodeUuid='a538dac3-b48d-41b5-a6f1-059b3cadc615', xmitsInProgress=0}:Exception transfering block BP-1958861295-172.31.14.131-1685188617397:blk_1073741873_1055 to mirror 127.0.0.1:42437: java.net.ConnectException: Connection refused 2023-05-27 11:57:46,485 WARN [Thread-756] hdfs.DataStreamer(1658): Abandoning BP-1958861295-172.31.14.131-1685188617397:blk_1073741873_1055 2023-05-27 11:57:46,485 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2002400429_17 at /127.0.0.1:58052 [Receiving block BP-1958861295-172.31.14.131-1685188617397:blk_1073741873_1055]] datanode.DataXceiver(323): 127.0.0.1:43567:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58052 dst: /127.0.0.1:43567 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:57:46,486 WARN [Thread-756] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42437,DS-997fb44f-efc6-4013-ab2b-30c9d171aaff,DISK] 2023-05-27 11:57:46,491 INFO [M:0;jenkins-hbase4:36001] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.07 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ee88a363e5854ea6a37660b5b7c44971 2023-05-27 11:57:46,497 DEBUG [M:0;jenkins-hbase4:36001] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ee88a363e5854ea6a37660b5b7c44971 as hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ee88a363e5854ea6a37660b5b7c44971 2023-05-27 11:57:46,502 INFO [M:0;jenkins-hbase4:36001] regionserver.HStore(1080): Added 
hdfs://localhost:35539/user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ee88a363e5854ea6a37660b5b7c44971, entries=11, sequenceid=92, filesize=7.0 K 2023-05-27 11:57:46,504 INFO [M:0;jenkins-hbase4:36001] regionserver.HRegion(2948): Finished flush of dataSize ~38.07 KB/38985, heapSize ~45.72 KB/46816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=92, compaction requested=false 2023-05-27 11:57:46,505 INFO [M:0;jenkins-hbase4:36001] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:57:46,505 DEBUG [M:0;jenkins-hbase4:36001] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:57:46,505 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/360ce7af-036d-6753-189c-a5fb4136ff99/MasterData/WALs/jenkins-hbase4.apache.org,36001,1685188618137 2023-05-27 11:57:46,509 INFO [M:0;jenkins-hbase4:36001] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-27 11:57:46,509 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 11:57:46,509 INFO [M:0;jenkins-hbase4:36001] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36001 2023-05-27 11:57:46,511 DEBUG [M:0;jenkins-hbase4:36001] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36001,1685188618137 already deleted, retry=false 2023-05-27 11:57:46,544 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-27 11:57:46,551 INFO [RS:1;jenkins-hbase4:44713] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44713,1685188619383; zookeeper connection closed. 2023-05-27 11:57:46,551 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44713-0x1006c802a810005, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:57:46,551 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44713-0x1006c802a810005, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:57:46,552 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3aa08dce] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3aa08dce 2023-05-27 11:57:46,651 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:57:46,651 INFO [M:0;jenkins-hbase4:36001] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36001,1685188618137; zookeeper connection closed. 
2023-05-27 11:57:46,651 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): master:36001-0x1006c802a810000, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:57:46,751 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:57:46,751 INFO [RS:0;jenkins-hbase4:44867] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44867,1685188618236; zookeeper connection closed. 2023-05-27 11:57:46,752 DEBUG [Listener at localhost/43037-EventThread] zookeeper.ZKWatcher(600): regionserver:44867-0x1006c802a810001, quorum=127.0.0.1:49196, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:57:46,752 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@378dbf1e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@378dbf1e 2023-05-27 11:57:46,753 INFO [Listener at localhost/43651] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-05-27 11:57:46,753 WARN [Listener at localhost/43651] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:57:46,757 INFO [Listener at localhost/43651] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:57:46,860 WARN [BP-1958861295-172.31.14.131-1685188617397 heartbeating to localhost/127.0.0.1:35539] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 11:57:46,860 WARN [BP-1958861295-172.31.14.131-1685188617397 heartbeating to localhost/127.0.0.1:35539] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1958861295-172.31.14.131-1685188617397 (Datanode Uuid a538dac3-b48d-41b5-a6f1-059b3cadc615) service to localhost/127.0.0.1:35539 2023-05-27 11:57:46,860 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data3/current/BP-1958861295-172.31.14.131-1685188617397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:57:46,861 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data4/current/BP-1958861295-172.31.14.131-1685188617397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:57:46,862 WARN [Listener at localhost/43651] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:57:46,863 WARN [BP-1958861295-172.31.14.131-1685188617397 heartbeating to localhost/127.0.0.1:35539] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1958861295-172.31.14.131-1685188617397 (Datanode Uuid 615332c0-eaa6-42e6-a19b-1bb8d2889cbb) service to localhost/127.0.0.1:35539 2023-05-27 11:57:46,864 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data7/current/BP-1958861295-172.31.14.131-1685188617397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:57:46,866 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/cluster_e3fb7d19-5253-488e-c881-c06518a2b7e9/dfs/data/data8/current/BP-1958861295-172.31.14.131-1685188617397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:57:46,866 INFO [Listener at localhost/43651] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:57:46,981 INFO [Listener at localhost/43651] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:57:47,104 INFO [Listener at localhost/43651] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-27 11:57:47,134 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-27 11:57:47,146 INFO [Listener at localhost/43651] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=78 (was 52) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (498694620) connection to localhost/127.0.0.1:35539 from jenkins.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (498694620) connection to localhost/127.0.0.1:35539 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ForkJoinPool-2-worker-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: IPC Client (498694620) connection to localhost/127.0.0.1:35539 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-6-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (498694620) connection to localhost/127.0.0.1:35539 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-13-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43651 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost:35539 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost:35539 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:35539 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-3 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=472 (was 444) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=86 (was 113), ProcessCount=169 (was 169), AvailableMemoryMB=4023 (was 4575) 2023-05-27 11:57:47,155 INFO [Listener at localhost/43651] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=78, OpenFileDescriptor=472, MaxFileDescriptor=60000, SystemLoadAverage=86, ProcessCount=169, AvailableMemoryMB=4022 2023-05-27 11:57:47,155 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-27 11:57:47,155 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/hadoop.log.dir so I do NOT create it in target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed 2023-05-27 11:57:47,155 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b1c50085-a102-9218-001a-9e1036712df8/hadoop.tmp.dir so I do NOT create it in target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed 2023-05-27 11:57:47,156 INFO [Listener at localhost/43651] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43, deleteOnExit=true 2023-05-27 11:57:47,156 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-27 11:57:47,156 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/test.cache.data in system properties and HBase conf 2023-05-27 11:57:47,156 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/hadoop.tmp.dir in system properties and HBase conf 2023-05-27 11:57:47,156 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/hadoop.log.dir in system properties and 
HBase conf 2023-05-27 11:57:47,156 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-27 11:57:47,156 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-27 11:57:47,156 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-27 11:57:47,156 DEBUG [Listener at localhost/43651] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-27 11:57:47,157 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-27 11:57:47,157 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-27 11:57:47,157 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-27 11:57:47,157 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 11:57:47,157 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-27 11:57:47,157 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-27 11:57:47,157 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 11:57:47,157 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 11:57:47,158 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-27 11:57:47,158 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/nfs.dump.dir in system properties and HBase conf 2023-05-27 11:57:47,158 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/java.io.tmpdir in system properties and HBase conf 2023-05-27 11:57:47,158 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 11:57:47,158 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-27 11:57:47,158 INFO [Listener at localhost/43651] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-27 11:57:47,159 WARN [Listener at localhost/43651] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-27 11:57:47,162 WARN [Listener at localhost/43651] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 11:57:47,162 WARN [Listener at localhost/43651] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 11:57:47,204 WARN [Listener at localhost/43651] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:57:47,207 INFO [Listener at localhost/43651] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:57:47,213 INFO [Listener at localhost/43651] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/java.io.tmpdir/Jetty_localhost_43621_hdfs____.loir7k/webapp 2023-05-27 11:57:47,306 INFO [Listener at localhost/43651] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43621 2023-05-27 11:57:47,307 WARN [Listener at localhost/43651] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-27 11:57:47,310 WARN [Listener at localhost/43651] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 11:57:47,310 WARN [Listener at localhost/43651] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 11:57:47,354 WARN [Listener at localhost/38451] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:57:47,365 WARN [Listener at localhost/38451] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:57:47,369 WARN [Listener at localhost/38451] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:57:47,370 INFO [Listener at localhost/38451] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:57:47,376 INFO [Listener at localhost/38451] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/java.io.tmpdir/Jetty_localhost_39819_datanode____.8u51b3/webapp 2023-05-27 11:57:47,448 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-27 11:57:47,490 INFO [Listener at localhost/38451] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39819 2023-05-27 11:57:47,496 WARN [Listener at localhost/42753] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:57:47,518 WARN [Listener at localhost/42753] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:57:47,520 WARN [Listener at localhost/42753] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:57:47,521 INFO [Listener at localhost/42753] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:57:47,532 INFO [Listener at localhost/42753] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/java.io.tmpdir/Jetty_localhost_41823_datanode____ut96f7/webapp 2023-05-27 11:57:47,627 INFO [Listener at localhost/42753] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41823 2023-05-27 11:57:47,630 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa62e66c64d4dcfe5: Processing first storage report for DS-63cd0c31-927a-466e-974e-9bdb4955e2c0 from datanode a2049cfd-6869-4c47-8ced-371d69b74c63 2023-05-27 11:57:47,630 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa62e66c64d4dcfe5: from storage DS-63cd0c31-927a-466e-974e-9bdb4955e2c0 node DatanodeRegistration(127.0.0.1:40605, datanodeUuid=a2049cfd-6869-4c47-8ced-371d69b74c63, infoPort=33427, infoSecurePort=0, ipcPort=42753, storageInfo=lv=-57;cid=testClusterID;nsid=682663649;c=1685188667165), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:57:47,630 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa62e66c64d4dcfe5: Processing first storage report for DS-91e2e317-3cb9-4494-ab50-d7217a624b64 from datanode a2049cfd-6869-4c47-8ced-371d69b74c63 2023-05-27 11:57:47,630 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa62e66c64d4dcfe5: from storage DS-91e2e317-3cb9-4494-ab50-d7217a624b64 node DatanodeRegistration(127.0.0.1:40605, datanodeUuid=a2049cfd-6869-4c47-8ced-371d69b74c63, infoPort=33427, infoSecurePort=0, ipcPort=42753, storageInfo=lv=-57;cid=testClusterID;nsid=682663649;c=1685188667165), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:57:47,636 WARN [Listener at localhost/44441] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:57:47,726 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x539a914891754599: Processing first storage report for DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2 from datanode 84211755-dc60-40c5-854f-c2dd72a6328b 2023-05-27 11:57:47,726 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x539a914891754599: from storage DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2 node DatanodeRegistration(127.0.0.1:33589, datanodeUuid=84211755-dc60-40c5-854f-c2dd72a6328b, infoPort=45119, infoSecurePort=0, ipcPort=44441, storageInfo=lv=-57;cid=testClusterID;nsid=682663649;c=1685188667165), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:57:47,726 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x539a914891754599: Processing first storage report for DS-7ff4b17e-a076-4044-a3ef-5846525c590b from datanode 84211755-dc60-40c5-854f-c2dd72a6328b 2023-05-27 11:57:47,726 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x539a914891754599: from storage DS-7ff4b17e-a076-4044-a3ef-5846525c590b node DatanodeRegistration(127.0.0.1:33589, datanodeUuid=84211755-dc60-40c5-854f-c2dd72a6328b, infoPort=45119, infoSecurePort=0, ipcPort=44441, storageInfo=lv=-57;cid=testClusterID;nsid=682663649;c=1685188667165), blocks: 0, 
hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:57:47,746 DEBUG [Listener at localhost/44441] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed 2023-05-27 11:57:47,748 INFO [Listener at localhost/44441] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/zookeeper_0, clientPort=59984, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-27 11:57:47,750 INFO [Listener at localhost/44441] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59984 2023-05-27 11:57:47,750 INFO [Listener at localhost/44441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:57:47,751 INFO [Listener at localhost/44441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:57:47,763 INFO [Listener at localhost/44441] util.FSUtils(471): Created version file at hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08 with version=8 2023-05-27 11:57:47,763 INFO [Listener at localhost/44441] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/hbase-staging 2023-05-27 11:57:47,764 INFO [Listener at localhost/44441] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 11:57:47,765 INFO [Listener at localhost/44441] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:57:47,765 INFO [Listener at localhost/44441] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 11:57:47,765 INFO [Listener at localhost/44441] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 11:57:47,765 INFO [Listener at localhost/44441] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:57:47,765 INFO [Listener at localhost/44441] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 11:57:47,765 INFO 
[Listener at localhost/44441] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 11:57:47,766 INFO [Listener at localhost/44441] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33701 2023-05-27 11:57:47,766 INFO [Listener at localhost/44441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:57:47,767 INFO [Listener at localhost/44441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:57:47,768 INFO [Listener at localhost/44441] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33701 connecting to ZooKeeper ensemble=127.0.0.1:59984 2023-05-27 11:57:47,775 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:337010x0, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 11:57:47,776 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33701-0x1006c80ec670000 connected 2023-05-27 11:57:47,789 DEBUG [Listener at localhost/44441] zookeeper.ZKUtil(164): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:57:47,790 DEBUG [Listener at localhost/44441] zookeeper.ZKUtil(164): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:57:47,790 DEBUG [Listener at localhost/44441] zookeeper.ZKUtil(164): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 11:57:47,791 DEBUG [Listener at localhost/44441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33701 2023-05-27 11:57:47,791 DEBUG [Listener at localhost/44441] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33701 2023-05-27 11:57:47,791 DEBUG [Listener at localhost/44441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33701 2023-05-27 11:57:47,791 DEBUG [Listener at localhost/44441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33701 2023-05-27 11:57:47,792 DEBUG [Listener at localhost/44441] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33701 2023-05-27 11:57:47,792 INFO [Listener at localhost/44441] master.HMaster(444): hbase.rootdir=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08, hbase.cluster.distributed=false 2023-05-27 11:57:47,804 INFO [Listener at localhost/44441] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 11:57:47,804 INFO [Listener at localhost/44441] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:57:47,805 INFO [Listener at localhost/44441] 
ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 11:57:47,805 INFO [Listener at localhost/44441] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 11:57:47,805 INFO [Listener at localhost/44441] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:57:47,805 INFO [Listener at localhost/44441] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 11:57:47,805 INFO [Listener at localhost/44441] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 11:57:47,806 INFO [Listener at localhost/44441] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35345 2023-05-27 11:57:47,807 INFO [Listener at localhost/44441] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-27 11:57:47,807 DEBUG [Listener at localhost/44441] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-27 11:57:47,808 INFO [Listener at localhost/44441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:57:47,809 INFO [Listener at localhost/44441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:57:47,810 INFO [Listener at localhost/44441] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35345 connecting to ZooKeeper ensemble=127.0.0.1:59984 2023-05-27 11:57:47,814 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): regionserver:353450x0, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 11:57:47,815 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35345-0x1006c80ec670001 connected 2023-05-27 11:57:47,815 DEBUG [Listener at localhost/44441] zookeeper.ZKUtil(164): regionserver:35345-0x1006c80ec670001, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:57:47,816 DEBUG [Listener at localhost/44441] zookeeper.ZKUtil(164): regionserver:35345-0x1006c80ec670001, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:57:47,816 DEBUG [Listener at localhost/44441] zookeeper.ZKUtil(164): regionserver:35345-0x1006c80ec670001, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 11:57:47,817 DEBUG [Listener at localhost/44441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35345 2023-05-27 11:57:47,817 DEBUG [Listener at localhost/44441] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35345 2023-05-27 11:57:47,817 DEBUG [Listener at localhost/44441] ipc.RpcExecutor(311): Started handlerCount=3 
with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35345 2023-05-27 11:57:47,817 DEBUG [Listener at localhost/44441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35345 2023-05-27 11:57:47,817 DEBUG [Listener at localhost/44441] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35345 2023-05-27 11:57:47,818 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,33701,1685188667764 2023-05-27 11:57:47,820 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 11:57:47,820 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,33701,1685188667764 2023-05-27 11:57:47,822 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 11:57:47,822 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): regionserver:35345-0x1006c80ec670001, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 11:57:47,822 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:57:47,823 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 11:57:47,823 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 11:57:47,823 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,33701,1685188667764 from backup master directory 2023-05-27 11:57:47,825 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,33701,1685188667764 2023-05-27 11:57:47,825 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 11:57:47,825 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-27 11:57:47,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,33701,1685188667764 2023-05-27 11:57:47,837 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/hbase.id with ID: bbe498c6-0064-4f8c-9191-e4dcd1a1e6a9 2023-05-27 11:57:47,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:57:47,851 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:57:47,863 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x27d7e11d to 127.0.0.1:59984 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:57:47,873 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@637ecaf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:57:47,873 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 11:57:47,873 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-27 11:57:47,874 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:57:47,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/data/master/store-tmp 2023-05-27 11:57:47,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:57:47,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 11:57:47,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:57:47,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:57:47,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 11:57:47,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:57:47,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:57:47,884 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:57:47,884 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/WALs/jenkins-hbase4.apache.org,33701,1685188667764 2023-05-27 11:57:47,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33701%2C1685188667764, suffix=, logDir=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/WALs/jenkins-hbase4.apache.org,33701,1685188667764, archiveDir=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/oldWALs, maxLogs=10 2023-05-27 11:57:47,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/WALs/jenkins-hbase4.apache.org,33701,1685188667764/jenkins-hbase4.apache.org%2C33701%2C1685188667764.1685188667887 2023-05-27 11:57:47,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33589,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK], DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] 2023-05-27 11:57:47,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:57:47,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:57:47,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:57:47,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:57:47,897 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:57:47,898 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-27 11:57:47,899 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-27 11:57:47,899 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:57:47,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:57:47,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:57:47,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:57:47,906 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:57:47,906 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=866848, jitterRate=0.10225507616996765}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:57:47,906 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:57:47,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-27 11:57:47,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-27 11:57:47,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
2023-05-27 11:57:47,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-27 11:57:47,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-27 11:57:47,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-27 11:57:47,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-27 11:57:47,911 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-27 11:57:47,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-27 11:57:47,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-27 11:57:47,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-27 11:57:47,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-27 11:57:47,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-27 11:57:47,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-27 11:57:47,926 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:57:47,926 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-27 11:57:47,927 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-27 11:57:47,927 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-27 11:57:47,929 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): regionserver:35345-0x1006c80ec670001, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 11:57:47,929 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 11:57:47,929 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:57:47,929 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,33701,1685188667764, sessionid=0x1006c80ec670000, setting cluster-up flag (Was=false) 2023-05-27 11:57:47,933 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:57:47,939 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-27 11:57:47,940 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33701,1685188667764 2023-05-27 11:57:47,943 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 
11:57:47,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-27 11:57:47,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33701,1685188667764 2023-05-27 11:57:47,949 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/.hbase-snapshot/.tmp 2023-05-27 11:57:47,951 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-27 11:57:47,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:57:47,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:57:47,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:57:47,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:57:47,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-27 11:57:47,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:57:47,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 11:57:47,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:57:47,953 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685188697953 2023-05-27 11:57:47,953 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-27 11:57:47,954 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-27 11:57:47,954 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-27 11:57:47,954 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-27 11:57:47,954 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-27 11:57:47,954 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-27 11:57:47,954 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 11:57:47,955 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-27 11:57:47,955 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 11:57:47,955 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-27 11:57:47,955 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-27 11:57:47,955 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-27 11:57:47,956 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 11:57:47,957 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-27 11:57:47,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-27 11:57:47,958 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188667958,5,FailOnTimeoutGroup] 2023-05-27 11:57:47,958 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188667958,5,FailOnTimeoutGroup] 2023-05-27 11:57:47,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-27 11:57:47,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-27 11:57:47,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-27 11:57:47,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-27 11:57:47,965 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 11:57:47,965 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 11:57:47,965 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08 2023-05-27 11:57:47,975 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:57:47,983 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 11:57:47,984 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740/info 2023-05-27 11:57:47,985 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 11:57:47,985 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:57:47,985 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 11:57:47,987 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:57:47,987 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 11:57:47,988 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:57:47,988 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 11:57:47,989 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740/table 2023-05-27 11:57:47,989 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 11:57:47,990 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:57:47,990 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740 2023-05-27 11:57:47,991 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740 2023-05-27 11:57:47,993 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 11:57:47,994 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 11:57:47,998 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:57:47,999 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=730533, jitterRate=-0.07108011841773987}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 11:57:47,999 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 11:57:47,999 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 11:57:47,999 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 11:57:47,999 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 11:57:47,999 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 11:57:47,999 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 11:57:47,999 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 11:57:47,999 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 11:57:48,000 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 11:57:48,001 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-27 11:57:48,001 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-27 11:57:48,002 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-27 11:57:48,004 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 
2023-05-27 11:57:48,019 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(951): ClusterId : bbe498c6-0064-4f8c-9191-e4dcd1a1e6a9 2023-05-27 11:57:48,020 DEBUG [RS:0;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-27 11:57:48,023 DEBUG [RS:0;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-27 11:57:48,023 DEBUG [RS:0;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-27 11:57:48,025 DEBUG [RS:0;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-27 11:57:48,026 DEBUG [RS:0;jenkins-hbase4:35345] zookeeper.ReadOnlyZKClient(139): Connect 0x05843759 to 127.0.0.1:59984 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:57:48,031 DEBUG [RS:0;jenkins-hbase4:35345] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1bbb331, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:57:48,031 DEBUG [RS:0;jenkins-hbase4:35345] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44b93d62, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 11:57:48,040 DEBUG [RS:0;jenkins-hbase4:35345] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:35345 2023-05-27 11:57:48,040 INFO [RS:0;jenkins-hbase4:35345] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-27 11:57:48,040 INFO [RS:0;jenkins-hbase4:35345] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-27 11:57:48,040 DEBUG [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-27 11:57:48,041 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,33701,1685188667764 with isa=jenkins-hbase4.apache.org/172.31.14.131:35345, startcode=1685188667804 2023-05-27 11:57:48,041 DEBUG [RS:0;jenkins-hbase4:35345] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-27 11:57:48,044 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33937, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 11:57:48,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33701] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:57:48,046 DEBUG [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08 2023-05-27 11:57:48,046 DEBUG [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38451 2023-05-27 11:57:48,046 DEBUG [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 11:57:48,047 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:57:48,048 DEBUG [RS:0;jenkins-hbase4:35345] zookeeper.ZKUtil(162): regionserver:35345-0x1006c80ec670001, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:57:48,048 WARN [RS:0;jenkins-hbase4:35345] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-27 11:57:48,048 INFO [RS:0;jenkins-hbase4:35345] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:57:48,048 DEBUG [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1946): logDir=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:57:48,049 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35345,1685188667804] 2023-05-27 11:57:48,052 DEBUG [RS:0;jenkins-hbase4:35345] zookeeper.ZKUtil(162): regionserver:35345-0x1006c80ec670001, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:57:48,053 DEBUG [RS:0;jenkins-hbase4:35345] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 11:57:48,054 INFO [RS:0;jenkins-hbase4:35345] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 11:57:48,055 INFO [RS:0;jenkins-hbase4:35345] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 11:57:48,056 INFO [RS:0;jenkins-hbase4:35345] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 11:57:48,056 INFO [RS:0;jenkins-hbase4:35345] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 11:57:48,058 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 11:57:48,059 INFO [RS:0;jenkins-hbase4:35345] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-27 11:57:48,059 DEBUG [RS:0;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:57:48,059 DEBUG [RS:0;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:57:48,060 DEBUG [RS:0;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:57:48,060 DEBUG [RS:0;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:57:48,060 DEBUG [RS:0;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:57:48,060 DEBUG [RS:0;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 11:57:48,060 DEBUG [RS:0;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:57:48,060 DEBUG [RS:0;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:57:48,060 DEBUG [RS:0;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:57:48,060 DEBUG [RS:0;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:57:48,063 INFO [RS:0;jenkins-hbase4:35345] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 11:57:48,063 INFO [RS:0;jenkins-hbase4:35345] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 11:57:48,063 INFO [RS:0;jenkins-hbase4:35345] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 11:57:48,080 INFO [RS:0;jenkins-hbase4:35345] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 11:57:48,080 INFO [RS:0;jenkins-hbase4:35345] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35345,1685188667804-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-27 11:57:48,096 INFO [RS:0;jenkins-hbase4:35345] regionserver.Replication(203): jenkins-hbase4.apache.org,35345,1685188667804 started 2023-05-27 11:57:48,096 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35345,1685188667804, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35345, sessionid=0x1006c80ec670001 2023-05-27 11:57:48,096 DEBUG [RS:0;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 11:57:48,096 DEBUG [RS:0;jenkins-hbase4:35345] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:57:48,096 DEBUG [RS:0;jenkins-hbase4:35345] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35345,1685188667804' 2023-05-27 11:57:48,096 DEBUG [RS:0;jenkins-hbase4:35345] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:57:48,097 DEBUG [RS:0;jenkins-hbase4:35345] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:57:48,098 DEBUG [RS:0;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 11:57:48,098 DEBUG [RS:0;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 11:57:48,098 DEBUG [RS:0;jenkins-hbase4:35345] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:57:48,098 DEBUG [RS:0;jenkins-hbase4:35345] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35345,1685188667804' 2023-05-27 11:57:48,098 DEBUG [RS:0;jenkins-hbase4:35345] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 11:57:48,098 DEBUG [RS:0;jenkins-hbase4:35345] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 11:57:48,099 DEBUG [RS:0;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 11:57:48,099 INFO [RS:0;jenkins-hbase4:35345] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 11:57:48,099 INFO [RS:0;jenkins-hbase4:35345] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-27 11:57:48,154 DEBUG [jenkins-hbase4:33701] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-27 11:57:48,155 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35345,1685188667804, state=OPENING 2023-05-27 11:57:48,156 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-27 11:57:48,158 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:57:48,158 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35345,1685188667804}] 2023-05-27 11:57:48,158 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 11:57:48,201 INFO [RS:0;jenkins-hbase4:35345] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35345%2C1685188667804, suffix=, logDir=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804, archiveDir=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/oldWALs, maxLogs=32 2023-05-27 11:57:48,210 INFO [RS:0;jenkins-hbase4:35345] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188668201 2023-05-27 11:57:48,210 DEBUG [RS:0;jenkins-hbase4:35345] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33589,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK], DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] 2023-05-27 11:57:48,313 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:57:48,313 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 11:57:48,316 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32770, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 11:57:48,319 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-27 11:57:48,320 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:57:48,321 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35345%2C1685188667804.meta, suffix=.meta, logDir=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804, archiveDir=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/oldWALs, maxLogs=32 2023-05-27 11:57:48,330 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.meta.1685188668322.meta 2023-05-27 11:57:48,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK]] 2023-05-27 11:57:48,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:57:48,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-27 11:57:48,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-27 11:57:48,330 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-27 11:57:48,331 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-27 11:57:48,331 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:57:48,331 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-27 11:57:48,331 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-27 11:57:48,334 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 11:57:48,335 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740/info 2023-05-27 11:57:48,335 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740/info 2023-05-27 11:57:48,335 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 11:57:48,336 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:57:48,336 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 11:57:48,337 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:57:48,337 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:57:48,337 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 11:57:48,338 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:57:48,338 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 11:57:48,339 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740/table 2023-05-27 11:57:48,339 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740/table 2023-05-27 11:57:48,339 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 11:57:48,340 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:57:48,340 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740 2023-05-27 11:57:48,342 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/meta/1588230740 2023-05-27 11:57:48,344 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 11:57:48,346 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 11:57:48,347 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=733912, jitterRate=-0.06678269803524017}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 11:57:48,347 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 11:57:48,349 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685188668313 2023-05-27 11:57:48,353 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-27 11:57:48,353 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-27 11:57:48,354 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35345,1685188667804, state=OPEN 2023-05-27 11:57:48,357 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-27 11:57:48,357 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 11:57:48,359 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-27 11:57:48,359 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35345,1685188667804 in 199 msec 2023-05-27 11:57:48,362 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-27 11:57:48,362 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 358 msec 2023-05-27 11:57:48,364 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 412 msec 2023-05-27 11:57:48,364 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685188668364, completionTime=-1 2023-05-27 11:57:48,364 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-27 11:57:48,364 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-27 11:57:48,366 DEBUG [hconnection-0x3a9bae9c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 11:57:48,368 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32780, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 11:57:48,369 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-27 11:57:48,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685188728369 2023-05-27 11:57:48,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685188788370 2023-05-27 11:57:48,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-27 11:57:48,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33701,1685188667764-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 11:57:48,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33701,1685188667764-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:57:48,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33701,1685188667764-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:57:48,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:33701, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:57:48,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-27 11:57:48,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-27 11:57:48,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 11:57:48,377 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-27 11:57:48,377 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-27 11:57:48,379 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 11:57:48,380 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 11:57:48,381 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/.tmp/data/hbase/namespace/a88c452815ef6b91fa062314e704d61e 2023-05-27 11:57:48,382 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/.tmp/data/hbase/namespace/a88c452815ef6b91fa062314e704d61e empty. 2023-05-27 11:57:48,382 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/.tmp/data/hbase/namespace/a88c452815ef6b91fa062314e704d61e 2023-05-27 11:57:48,382 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-27 11:57:48,395 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-27 11:57:48,396 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => a88c452815ef6b91fa062314e704d61e, NAME => 'hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/.tmp 2023-05-27 11:57:48,404 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:57:48,404 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing a88c452815ef6b91fa062314e704d61e, disabling compactions & flushes 2023-05-27 11:57:48,404 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 
2023-05-27 11:57:48,404 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 2023-05-27 11:57:48,404 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. after waiting 0 ms 2023-05-27 11:57:48,404 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 2023-05-27 11:57:48,404 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 2023-05-27 11:57:48,404 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for a88c452815ef6b91fa062314e704d61e: 2023-05-27 11:57:48,407 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 11:57:48,408 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188668407"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188668407"}]},"ts":"1685188668407"} 2023-05-27 11:57:48,410 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 11:57:48,411 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 11:57:48,412 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188668411"}]},"ts":"1685188668411"} 2023-05-27 11:57:48,413 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-27 11:57:48,419 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a88c452815ef6b91fa062314e704d61e, ASSIGN}] 2023-05-27 11:57:48,422 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a88c452815ef6b91fa062314e704d61e, ASSIGN 2023-05-27 11:57:48,423 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=a88c452815ef6b91fa062314e704d61e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35345,1685188667804; forceNewPlan=false, retain=false 2023-05-27 11:57:48,574 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=a88c452815ef6b91fa062314e704d61e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:57:48,574 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188668574"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188668574"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188668574"}]},"ts":"1685188668574"} 2023-05-27 11:57:48,576 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure a88c452815ef6b91fa062314e704d61e, server=jenkins-hbase4.apache.org,35345,1685188667804}] 2023-05-27 11:57:48,733 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 2023-05-27 11:57:48,733 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a88c452815ef6b91fa062314e704d61e, NAME => 'hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:57:48,734 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace a88c452815ef6b91fa062314e704d61e 2023-05-27 11:57:48,734 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:57:48,734 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a88c452815ef6b91fa062314e704d61e 2023-05-27 11:57:48,734 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a88c452815ef6b91fa062314e704d61e 2023-05-27 11:57:48,735 INFO [StoreOpener-a88c452815ef6b91fa062314e704d61e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region a88c452815ef6b91fa062314e704d61e 2023-05-27 11:57:48,736 DEBUG [StoreOpener-a88c452815ef6b91fa062314e704d61e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/namespace/a88c452815ef6b91fa062314e704d61e/info 2023-05-27 11:57:48,736 DEBUG [StoreOpener-a88c452815ef6b91fa062314e704d61e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/namespace/a88c452815ef6b91fa062314e704d61e/info 2023-05-27 11:57:48,737 INFO [StoreOpener-a88c452815ef6b91fa062314e704d61e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a88c452815ef6b91fa062314e704d61e columnFamilyName info 2023-05-27 11:57:48,737 INFO [StoreOpener-a88c452815ef6b91fa062314e704d61e-1] regionserver.HStore(310): Store=a88c452815ef6b91fa062314e704d61e/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:57:48,738 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/namespace/a88c452815ef6b91fa062314e704d61e 2023-05-27 11:57:48,738 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/namespace/a88c452815ef6b91fa062314e704d61e 2023-05-27 11:57:48,741 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a88c452815ef6b91fa062314e704d61e 2023-05-27 11:57:48,742 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/hbase/namespace/a88c452815ef6b91fa062314e704d61e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:57:48,743 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a88c452815ef6b91fa062314e704d61e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=871190, jitterRate=0.10777555406093597}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:57:48,743 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a88c452815ef6b91fa062314e704d61e: 2023-05-27 11:57:48,746 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e., pid=6, masterSystemTime=1685188668729 2023-05-27 11:57:48,748 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 2023-05-27 11:57:48,748 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 
2023-05-27 11:57:48,749 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=a88c452815ef6b91fa062314e704d61e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:57:48,749 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188668749"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685188668749"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685188668749"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188668749"}]},"ts":"1685188668749"} 2023-05-27 11:57:48,753 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-27 11:57:48,753 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure a88c452815ef6b91fa062314e704d61e, server=jenkins-hbase4.apache.org,35345,1685188667804 in 175 msec 2023-05-27 11:57:48,756 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-27 11:57:48,758 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=a88c452815ef6b91fa062314e704d61e, ASSIGN in 334 msec 2023-05-27 11:57:48,760 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 11:57:48,760 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188668760"}]},"ts":"1685188668760"} 2023-05-27 11:57:48,761 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-27 11:57:48,764 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 11:57:48,766 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 388 msec 2023-05-27 11:57:48,778 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-27 11:57:48,779 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:57:48,779 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:57:48,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-27 11:57:48,792 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): 
master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:57:48,797 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-05-27 11:57:48,805 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-27 11:57:48,812 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:57:48,815 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-05-27 11:57:48,830 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-27 11:57:48,832 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-27 11:57:48,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.007sec 2023-05-27 11:57:48,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-27 11:57:48,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-27 11:57:48,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-27 11:57:48,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33701,1685188667764-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-27 11:57:48,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33701,1685188667764-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-27 11:57:48,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-27 11:57:48,920 DEBUG [Listener at localhost/44441] zookeeper.ReadOnlyZKClient(139): Connect 0x17e0aeb9 to 127.0.0.1:59984 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:57:48,925 DEBUG [Listener at localhost/44441] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7cb40a7a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:57:48,926 DEBUG [hconnection-0xd7ff080-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 11:57:48,928 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32782, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 11:57:48,929 INFO [Listener at localhost/44441] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,33701,1685188667764 2023-05-27 11:57:48,930 INFO [Listener at localhost/44441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:57:48,933 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-27 11:57:48,933 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:57:48,934 INFO [Listener at localhost/44441] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-27 11:57:48,934 INFO [Listener at localhost/44441] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-05-27 11:57:48,934 INFO [Listener at localhost/44441] wal.TestLogRolling(432): Replication=2 2023-05-27 11:57:48,936 DEBUG [Listener at localhost/44441] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-27 11:57:48,939 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34532, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-27 11:57:48,940 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33701] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-27 11:57:48,941 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33701] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-27 11:57:48,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33701] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 11:57:48,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33701] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-05-27 11:57:48,944 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 11:57:48,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33701] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-05-27 11:57:48,945 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 11:57:48,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33701] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 11:57:48,947 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/565a2df624a6368b84694a6c62f75c2e 2023-05-27 11:57:48,948 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/565a2df624a6368b84694a6c62f75c2e empty. 
2023-05-27 11:57:48,948 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/565a2df624a6368b84694a6c62f75c2e 2023-05-27 11:57:48,948 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-05-27 11:57:48,959 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-05-27 11:57:48,960 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => 565a2df624a6368b84694a6c62f75c2e, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/.tmp 2023-05-27 11:57:48,968 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:57:48,968 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing 565a2df624a6368b84694a6c62f75c2e, disabling compactions & flushes 2023-05-27 11:57:48,968 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 2023-05-27 11:57:48,968 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 2023-05-27 11:57:48,968 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. after waiting 0 ms 2023-05-27 11:57:48,968 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 2023-05-27 11:57:48,968 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 
2023-05-27 11:57:48,968 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for 565a2df624a6368b84694a6c62f75c2e: 2023-05-27 11:57:48,970 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 11:57:48,971 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685188668971"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188668971"}]},"ts":"1685188668971"} 2023-05-27 11:57:48,973 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 11:57:48,974 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 11:57:48,974 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188668974"}]},"ts":"1685188668974"} 2023-05-27 11:57:48,976 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-05-27 11:57:48,980 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=565a2df624a6368b84694a6c62f75c2e, ASSIGN}] 2023-05-27 11:57:48,982 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=565a2df624a6368b84694a6c62f75c2e, ASSIGN 2023-05-27 11:57:48,983 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=565a2df624a6368b84694a6c62f75c2e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35345,1685188667804; forceNewPlan=false, retain=false 2023-05-27 11:57:49,134 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=565a2df624a6368b84694a6c62f75c2e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:57:49,134 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685188669134"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188669134"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188669134"}]},"ts":"1685188669134"} 2023-05-27 11:57:49,136 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 565a2df624a6368b84694a6c62f75c2e, server=jenkins-hbase4.apache.org,35345,1685188667804}] 
2023-05-27 11:57:49,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 2023-05-27 11:57:49,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 565a2df624a6368b84694a6c62f75c2e, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:57:49,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart 565a2df624a6368b84694a6c62f75c2e 2023-05-27 11:57:49,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:57:49,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 565a2df624a6368b84694a6c62f75c2e 2023-05-27 11:57:49,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 565a2df624a6368b84694a6c62f75c2e 2023-05-27 11:57:49,295 INFO [StoreOpener-565a2df624a6368b84694a6c62f75c2e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 565a2df624a6368b84694a6c62f75c2e 2023-05-27 11:57:49,296 DEBUG [StoreOpener-565a2df624a6368b84694a6c62f75c2e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/default/TestLogRolling-testLogRollOnPipelineRestart/565a2df624a6368b84694a6c62f75c2e/info 2023-05-27 11:57:49,296 DEBUG [StoreOpener-565a2df624a6368b84694a6c62f75c2e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/default/TestLogRolling-testLogRollOnPipelineRestart/565a2df624a6368b84694a6c62f75c2e/info 2023-05-27 11:57:49,297 INFO [StoreOpener-565a2df624a6368b84694a6c62f75c2e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 565a2df624a6368b84694a6c62f75c2e columnFamilyName info 2023-05-27 11:57:49,297 INFO [StoreOpener-565a2df624a6368b84694a6c62f75c2e-1] regionserver.HStore(310): Store=565a2df624a6368b84694a6c62f75c2e/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:57:49,298 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/default/TestLogRolling-testLogRollOnPipelineRestart/565a2df624a6368b84694a6c62f75c2e 2023-05-27 11:57:49,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/default/TestLogRolling-testLogRollOnPipelineRestart/565a2df624a6368b84694a6c62f75c2e 2023-05-27 11:57:49,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 565a2df624a6368b84694a6c62f75c2e 2023-05-27 11:57:49,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/data/default/TestLogRolling-testLogRollOnPipelineRestart/565a2df624a6368b84694a6c62f75c2e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:57:49,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 565a2df624a6368b84694a6c62f75c2e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=688312, jitterRate=-0.12476666271686554}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:57:49,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 565a2df624a6368b84694a6c62f75c2e: 2023-05-27 11:57:49,304 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e., pid=11, masterSystemTime=1685188669289 2023-05-27 11:57:49,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 2023-05-27 11:57:49,306 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 
2023-05-27 11:57:49,307 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=565a2df624a6368b84694a6c62f75c2e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:57:49,307 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685188669306"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685188669306"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685188669306"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188669306"}]},"ts":"1685188669306"} 2023-05-27 11:57:49,311 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-27 11:57:49,311 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 565a2df624a6368b84694a6c62f75c2e, server=jenkins-hbase4.apache.org,35345,1685188667804 in 173 msec 2023-05-27 11:57:49,313 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-27 11:57:49,313 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=565a2df624a6368b84694a6c62f75c2e, ASSIGN in 331 msec 2023-05-27 11:57:49,314 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 11:57:49,314 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188669314"}]},"ts":"1685188669314"} 2023-05-27 11:57:49,315 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-05-27 11:57:49,318 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 11:57:49,319 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 377 msec 2023-05-27 11:57:51,656 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-27 11:57:54,054 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-05-27 11:57:58,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33701] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 11:57:58,947 INFO [Listener at localhost/44441] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 2023-05-27 11:57:58,949 DEBUG [Listener at localhost/44441] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart 
2023-05-27 11:57:58,949 DEBUG [Listener at localhost/44441] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 2023-05-27 11:58:00,955 INFO [Listener at localhost/44441] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188668201 2023-05-27 11:58:00,956 WARN [Listener at localhost/44441] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:58:00,958 WARN [ResponseProcessor for block BP-1757707561-172.31.14.131-1685188667165:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1757707561-172.31.14.131-1685188667165:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:58:00,958 WARN [ResponseProcessor for block BP-1757707561-172.31.14.131-1685188667165:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1757707561-172.31.14.131-1685188667165:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:58:00,959 WARN [DataStreamer for file /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/WALs/jenkins-hbase4.apache.org,33701,1685188667764/jenkins-hbase4.apache.org%2C33701%2C1685188667764.1685188667887 block BP-1757707561-172.31.14.131-1685188667165:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1757707561-172.31.14.131-1685188667165:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33589,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK], DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33589,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK]) is bad. 
2023-05-27 11:58:00,959 WARN [ResponseProcessor for block BP-1757707561-172.31.14.131-1685188667165:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1757707561-172.31.14.131-1685188667165:blk_1073741833_1009 java.io.IOException: Bad response ERROR for BP-1757707561-172.31.14.131-1685188667165:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:33589,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-27 11:58:00,959 WARN [DataStreamer for file /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188668201 block BP-1757707561-172.31.14.131-1685188667165:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1757707561-172.31.14.131-1685188667165:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33589,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK], DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33589,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK]) is bad. 2023-05-27 11:58:00,960 WARN [DataStreamer for file /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.meta.1685188668322.meta block BP-1757707561-172.31.14.131-1685188667165:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1757707561-172.31.14.131-1685188667165:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:33589,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK]) is bad. 
2023-05-27 11:58:01,018 WARN [PacketResponder: BP-1757707561-172.31.14.131-1685188667165:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33589]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:01,020 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_779556912_17 at /127.0.0.1:50918 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:40605:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50918 dst: /127.0.0.1:40605 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:01,028 INFO [Listener at localhost/44441] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:58:01,037 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1891407603_17 at /127.0.0.1:50894 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:40605:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50894 dst: /127.0.0.1:40605 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:40605 remote=/127.0.0.1:50894]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:01,038 WARN [PacketResponder: BP-1757707561-172.31.14.131-1685188667165:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40605]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:01,038 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_779556912_17 at /127.0.0.1:50916 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:40605:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50916 dst: /127.0.0.1:40605 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:40605 remote=/127.0.0.1:50916]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:01,038 WARN [PacketResponder: BP-1757707561-172.31.14.131-1685188667165:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40605]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:01,039 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_779556912_17 at /127.0.0.1:49704 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:33589:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49704 dst: /127.0.0.1:33589 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:01,041 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1891407603_17 at /127.0.0.1:49694 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:33589:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49694 dst: /127.0.0.1:33589 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:01,132 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_779556912_17 at /127.0.0.1:49716 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:33589:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49716 dst: /127.0.0.1:33589 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:01,133 WARN [BP-1757707561-172.31.14.131-1685188667165 heartbeating to localhost/127.0.0.1:38451] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 11:58:01,134 WARN [BP-1757707561-172.31.14.131-1685188667165 heartbeating to localhost/127.0.0.1:38451] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1757707561-172.31.14.131-1685188667165 (Datanode Uuid 84211755-dc60-40c5-854f-c2dd72a6328b) service to localhost/127.0.0.1:38451 2023-05-27 11:58:01,135 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data3/current/BP-1757707561-172.31.14.131-1685188667165] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:58:01,135 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data4/current/BP-1757707561-172.31.14.131-1685188667165] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:58:01,141 WARN [Listener at localhost/44441] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:58:01,143 WARN [Listener at localhost/44441] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:58:01,144 INFO [Listener at localhost/44441] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:58:01,149 INFO [Listener at localhost/44441] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/java.io.tmpdir/Jetty_localhost_35409_datanode____4lkpaa/webapp 2023-05-27 11:58:01,238 INFO [Listener at localhost/44441] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35409 2023-05-27 11:58:01,244 WARN [Listener at localhost/34959] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:58:01,249 WARN [Listener at localhost/34959] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:58:01,249 WARN [ResponseProcessor for block BP-1757707561-172.31.14.131-1685188667165:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1757707561-172.31.14.131-1685188667165:blk_1073741833_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:58:01,250 WARN [ResponseProcessor for block BP-1757707561-172.31.14.131-1685188667165:blk_1073741832_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1757707561-172.31.14.131-1685188667165:blk_1073741832_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:58:01,250 WARN [ResponseProcessor for block BP-1757707561-172.31.14.131-1685188667165:blk_1073741829_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1757707561-172.31.14.131-1685188667165:blk_1073741829_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:58:01,255 INFO [Listener at localhost/34959] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:58:01,314 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x63a12ff83b707914: Processing first storage report for DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2 from datanode 84211755-dc60-40c5-854f-c2dd72a6328b 2023-05-27 11:58:01,314 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x63a12ff83b707914: from storage DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2 node DatanodeRegistration(127.0.0.1:44989, datanodeUuid=84211755-dc60-40c5-854f-c2dd72a6328b, infoPort=36063, infoSecurePort=0, ipcPort=34959, storageInfo=lv=-57;cid=testClusterID;nsid=682663649;c=1685188667165), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:58:01,314 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x63a12ff83b707914: Processing first storage report for DS-7ff4b17e-a076-4044-a3ef-5846525c590b from datanode 84211755-dc60-40c5-854f-c2dd72a6328b 2023-05-27 
11:58:01,315 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x63a12ff83b707914: from storage DS-7ff4b17e-a076-4044-a3ef-5846525c590b node DatanodeRegistration(127.0.0.1:44989, datanodeUuid=84211755-dc60-40c5-854f-c2dd72a6328b, infoPort=36063, infoSecurePort=0, ipcPort=34959, storageInfo=lv=-57;cid=testClusterID;nsid=682663649;c=1685188667165), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:58:01,358 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_779556912_17 at /127.0.0.1:40996 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:40605:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40996 dst: /127.0.0.1:40605 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:01,360 WARN [BP-1757707561-172.31.14.131-1685188667165 heartbeating to localhost/127.0.0.1:38451] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 11:58:01,359 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_779556912_17 at /127.0.0.1:40984 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:40605:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40984 dst: /127.0.0.1:40605 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:01,359 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1891407603_17 at /127.0.0.1:40972 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:40605:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40972 dst: /127.0.0.1:40605 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:01,360 WARN [BP-1757707561-172.31.14.131-1685188667165 heartbeating to localhost/127.0.0.1:38451] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1757707561-172.31.14.131-1685188667165 (Datanode Uuid a2049cfd-6869-4c47-8ced-371d69b74c63) service to localhost/127.0.0.1:38451 2023-05-27 11:58:01,363 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data1/current/BP-1757707561-172.31.14.131-1685188667165] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:58:01,363 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data2/current/BP-1757707561-172.31.14.131-1685188667165] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:58:01,370 WARN [Listener at localhost/34959] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:58:01,372 WARN [Listener at localhost/34959] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:58:01,373 INFO [Listener at localhost/34959] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:58:01,378 INFO [Listener at localhost/34959] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/java.io.tmpdir/Jetty_localhost_43231_datanode____7ql9co/webapp 2023-05-27 11:58:01,470 INFO [Listener at localhost/34959] 
log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43231 2023-05-27 11:58:01,478 WARN [Listener at localhost/33595] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:58:01,554 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x534b87901447a827: Processing first storage report for DS-63cd0c31-927a-466e-974e-9bdb4955e2c0 from datanode a2049cfd-6869-4c47-8ced-371d69b74c63 2023-05-27 11:58:01,554 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x534b87901447a827: from storage DS-63cd0c31-927a-466e-974e-9bdb4955e2c0 node DatanodeRegistration(127.0.0.1:40387, datanodeUuid=a2049cfd-6869-4c47-8ced-371d69b74c63, infoPort=45017, infoSecurePort=0, ipcPort=33595, storageInfo=lv=-57;cid=testClusterID;nsid=682663649;c=1685188667165), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:58:01,554 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x534b87901447a827: Processing first storage report for DS-91e2e317-3cb9-4494-ab50-d7217a624b64 from datanode a2049cfd-6869-4c47-8ced-371d69b74c63 2023-05-27 11:58:01,554 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x534b87901447a827: from storage DS-91e2e317-3cb9-4494-ab50-d7217a624b64 node DatanodeRegistration(127.0.0.1:40387, datanodeUuid=a2049cfd-6869-4c47-8ced-371d69b74c63, infoPort=45017, infoSecurePort=0, ipcPort=33595, storageInfo=lv=-57;cid=testClusterID;nsid=682663649;c=1685188667165), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-27 11:58:02,481 INFO [Listener at localhost/33595] wal.TestLogRolling(481): Data Nodes restarted 2023-05-27 11:58:02,483 INFO [Listener at localhost/33595] wal.AbstractTestLogRolling(233): Validated row row1002 2023-05-27 11:58:02,484 WARN [RS:0;jenkins-hbase4:35345.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:02,486 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C35345%2C1685188667804:(num 1685188668201) roll requested 2023-05-27 11:58:02,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35345] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:02,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35345] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:32782 deadline: 1685188692483, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-27 11:58:02,494 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188668201 newFile=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188682486 2023-05-27 11:58:02,494 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-27 11:58:02,494 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188668201 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188682486 2023-05-27 11:58:02,495 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:40387,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK], DatanodeInfoWithStorage[127.0.0.1:44989,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK]] 2023-05-27 11:58:02,495 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188668201 is not closed yet, will try archiving it next time 2023-05-27 11:58:02,495 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:02,495 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188668201; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:14,544 INFO [Listener at localhost/33595] wal.AbstractTestLogRolling(233): Validated row row1003 2023-05-27 11:58:16,546 WARN [Listener at localhost/33595] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:58:16,547 WARN [ResponseProcessor for block BP-1757707561-172.31.14.131-1685188667165:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1757707561-172.31.14.131-1685188667165:blk_1073741838_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:58:16,548 WARN [DataStreamer for file /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188682486 block BP-1757707561-172.31.14.131-1685188667165:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-1757707561-172.31.14.131-1685188667165:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40387,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK], DatanodeInfoWithStorage[127.0.0.1:44989,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40387,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]) is bad. 
2023-05-27 11:58:16,552 INFO [Listener at localhost/33595] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:58:16,553 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_779556912_17 at /127.0.0.1:54032 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:44989:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54032 dst: /127.0.0.1:44989 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:44989 remote=/127.0.0.1:54032]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:16,553 WARN [PacketResponder: BP-1757707561-172.31.14.131-1685188667165:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:44989]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:16,554 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_779556912_17 at /127.0.0.1:60884 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:40387:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60884 dst: /127.0.0.1:40387 java.io.InterruptedIOException: Interrupted while waiting for IO on 
channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:16,554 WARN [BP-1757707561-172.31.14.131-1685188667165 heartbeating to localhost/127.0.0.1:38451] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1757707561-172.31.14.131-1685188667165 (Datanode Uuid a2049cfd-6869-4c47-8ced-371d69b74c63) service to localhost/127.0.0.1:38451 2023-05-27 11:58:16,555 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data1/current/BP-1757707561-172.31.14.131-1685188667165] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:58:16,555 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data2/current/BP-1757707561-172.31.14.131-1685188667165] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:58:16,662 WARN [Listener at localhost/33595] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:58:16,664 WARN [Listener at localhost/33595] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:58:16,666 INFO [Listener at localhost/33595] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:58:16,670 INFO [Listener at localhost/33595] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/java.io.tmpdir/Jetty_localhost_45591_datanode____fhd61f/webapp 2023-05-27 11:58:16,760 INFO [Listener at localhost/33595] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45591 2023-05-27 11:58:16,769 WARN [Listener at localhost/45923] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:58:16,772 WARN [Listener at localhost/45923] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:58:16,772 WARN [ResponseProcessor for block BP-1757707561-172.31.14.131-1685188667165:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1757707561-172.31.14.131-1685188667165:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:58:16,777 INFO [Listener at localhost/45923] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:58:16,835 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd60df1594a6d7789: Processing first storage report for DS-63cd0c31-927a-466e-974e-9bdb4955e2c0 from datanode a2049cfd-6869-4c47-8ced-371d69b74c63 2023-05-27 11:58:16,835 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd60df1594a6d7789: from storage DS-63cd0c31-927a-466e-974e-9bdb4955e2c0 node DatanodeRegistration(127.0.0.1:33835, datanodeUuid=a2049cfd-6869-4c47-8ced-371d69b74c63, infoPort=34405, infoSecurePort=0, ipcPort=45923, storageInfo=lv=-57;cid=testClusterID;nsid=682663649;c=1685188667165), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:58:16,835 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd60df1594a6d7789: Processing first storage report for DS-91e2e317-3cb9-4494-ab50-d7217a624b64 from datanode a2049cfd-6869-4c47-8ced-371d69b74c63 2023-05-27 11:58:16,835 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd60df1594a6d7789: from storage DS-91e2e317-3cb9-4494-ab50-d7217a624b64 node DatanodeRegistration(127.0.0.1:33835, datanodeUuid=a2049cfd-6869-4c47-8ced-371d69b74c63, infoPort=34405, infoSecurePort=0, ipcPort=45923, storageInfo=lv=-57;cid=testClusterID;nsid=682663649;c=1685188667165), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:58:16,881 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_779556912_17 at /127.0.0.1:44272 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:44989:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44272 dst: /127.0.0.1:44989 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:16,882 WARN [BP-1757707561-172.31.14.131-1685188667165 heartbeating to localhost/127.0.0.1:38451] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 11:58:16,882 WARN [BP-1757707561-172.31.14.131-1685188667165 heartbeating to localhost/127.0.0.1:38451] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1757707561-172.31.14.131-1685188667165 (Datanode Uuid 84211755-dc60-40c5-854f-c2dd72a6328b) service to localhost/127.0.0.1:38451 2023-05-27 11:58:16,883 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data3/current/BP-1757707561-172.31.14.131-1685188667165] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:58:16,883 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data4/current/BP-1757707561-172.31.14.131-1685188667165] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:58:16,890 WARN [Listener at localhost/45923] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:58:16,892 WARN [Listener at localhost/45923] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:58:16,893 INFO [Listener at localhost/45923] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:58:16,898 INFO [Listener at localhost/45923] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/java.io.tmpdir/Jetty_localhost_45441_datanode____wcsh9z/webapp 2023-05-27 11:58:16,987 INFO [Listener at localhost/45923] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45441 2023-05-27 11:58:16,995 WARN [Listener at localhost/35527] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:58:17,063 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3835f8bb34651cee: Processing first storage report for DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2 from datanode 84211755-dc60-40c5-854f-c2dd72a6328b 2023-05-27 11:58:17,063 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3835f8bb34651cee: from storage DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2 node DatanodeRegistration(127.0.0.1:46105, datanodeUuid=84211755-dc60-40c5-854f-c2dd72a6328b, infoPort=43961, infoSecurePort=0, ipcPort=35527, storageInfo=lv=-57;cid=testClusterID;nsid=682663649;c=1685188667165), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:58:17,063 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3835f8bb34651cee: Processing first storage report for DS-7ff4b17e-a076-4044-a3ef-5846525c590b from datanode 84211755-dc60-40c5-854f-c2dd72a6328b 2023-05-27 11:58:17,063 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3835f8bb34651cee: from storage DS-7ff4b17e-a076-4044-a3ef-5846525c590b node DatanodeRegistration(127.0.0.1:46105, datanodeUuid=84211755-dc60-40c5-854f-c2dd72a6328b, infoPort=43961, infoSecurePort=0, ipcPort=35527, storageInfo=lv=-57;cid=testClusterID;nsid=682663649;c=1685188667165), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:58:17,955 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:17,955 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C33701%2C1685188667764:(num 1685188667887) roll requested 2023-05-27 11:58:17,955 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:17,955 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:17,962 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-27 11:58:17,962 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/WALs/jenkins-hbase4.apache.org,33701,1685188667764/jenkins-hbase4.apache.org%2C33701%2C1685188667764.1685188667887 with entries=88, filesize=43.79 KB; new WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/WALs/jenkins-hbase4.apache.org,33701,1685188667764/jenkins-hbase4.apache.org%2C33701%2C1685188667764.1685188697955 2023-05-27 11:58:17,962 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46105,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK], DatanodeInfoWithStorage[127.0.0.1:33835,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] 2023-05-27 11:58:17,962 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:17,962 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/WALs/jenkins-hbase4.apache.org,33701,1685188667764/jenkins-hbase4.apache.org%2C33701%2C1685188667764.1685188667887 is not closed yet, will try archiving it next time 2023-05-27 11:58:17,962 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/WALs/jenkins-hbase4.apache.org,33701,1685188667764/jenkins-hbase4.apache.org%2C33701%2C1685188667764.1685188667887; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:17,999 INFO [Listener at localhost/35527] wal.TestLogRolling(498): Data Nodes restarted 2023-05-27 11:58:18,000 INFO [Listener at localhost/35527] wal.AbstractTestLogRolling(233): Validated row row1004 2023-05-27 11:58:18,001 WARN [RS:0;jenkins-hbase4:35345.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44989,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:18,002 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C35345%2C1685188667804:(num 1685188682486) roll requested 2023-05-27 11:58:18,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35345] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44989,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:18,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35345] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:32782 deadline: 1685188708001, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-27 11:58:18,013 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188682486 newFile=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188698002 2023-05-27 11:58:18,013 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-27 11:58:18,013 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188682486 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188698002 2023-05-27 11:58:18,013 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46105,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK], DatanodeInfoWithStorage[127.0.0.1:33835,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] 2023-05-27 11:58:18,013 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44989,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:18,013 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188682486 is not closed yet, will try archiving it next time 2023-05-27 11:58:18,013 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188682486; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44989,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:30,048 DEBUG [Listener at localhost/35527] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188698002 newFile=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039 2023-05-27 11:58:30,049 INFO [Listener at localhost/35527] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188698002 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039 2023-05-27 11:58:30,053 DEBUG [Listener at localhost/35527] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33835,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK], DatanodeInfoWithStorage[127.0.0.1:46105,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK]] 2023-05-27 11:58:30,053 DEBUG [Listener at localhost/35527] wal.AbstractFSWAL(716): hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188698002 is not closed yet, will try archiving it next time 2023-05-27 11:58:30,053 DEBUG [Listener at localhost/35527] wal.TestLogRolling(512): recovering lease for hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188668201 2023-05-27 11:58:30,054 INFO [Listener at localhost/35527] util.RecoverLeaseFSUtils(86): Recover lease on dfs file 
hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188668201 2023-05-27 11:58:30,057 WARN [IPC Server handler 3 on default port 38451] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188668201 has not been closed. Lease recovery is in progress. RecoveryId = 1022 for block blk_1073741832_1015 2023-05-27 11:58:30,059 INFO [Listener at localhost/35527] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188668201 after 5ms 2023-05-27 11:58:30,858 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@1e2408d4] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1757707561-172.31.14.131-1685188667165:blk_1073741832_1015, datanode=DatanodeInfoWithStorage[127.0.0.1:46105,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1015, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2160 getBytesOnDisk() = 2160 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data4/current/BP-1757707561-172.31.14.131-1685188667165/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1015, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2160 getBytesOnDisk() = 2160 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data4/current/BP-1757707561-172.31.14.131-1685188667165/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 4 more 2023-05-27 11:58:34,060 INFO [Listener at localhost/35527] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188668201 after 4006ms 2023-05-27 11:58:34,060 DEBUG [Listener at localhost/35527] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188668201 2023-05-27 11:58:34,071 DEBUG [Listener at localhost/35527] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685188668743/Put/vlen=175/seqid=0] 2023-05-27 11:58:34,071 DEBUG [Listener at localhost/35527] wal.TestLogRolling(522): #4: [default/info:d/1685188668788/Put/vlen=9/seqid=0] 2023-05-27 11:58:34,071 DEBUG [Listener at localhost/35527] wal.TestLogRolling(522): #5: [hbase/info:d/1685188668809/Put/vlen=7/seqid=0] 2023-05-27 11:58:34,071 DEBUG [Listener at localhost/35527] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685188669303/Put/vlen=231/seqid=0] 2023-05-27 11:58:34,071 DEBUG [Listener at localhost/35527] wal.TestLogRolling(522): #4: [row1002/info:/1685188678953/Put/vlen=1045/seqid=0] 2023-05-27 11:58:34,071 DEBUG [Listener at localhost/35527] wal.ProtobufLogReader(420): EOF at position 2160 2023-05-27 11:58:34,072 DEBUG [Listener at localhost/35527] wal.TestLogRolling(512): recovering lease for hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188682486 2023-05-27 11:58:34,072 INFO [Listener at localhost/35527] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188682486 2023-05-27 11:58:34,072 WARN [IPC Server handler 3 on default port 38451] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188682486 has not been closed. Lease recovery is in progress. 
RecoveryId = 1023 for block blk_1073741838_1018 2023-05-27 11:58:34,072 INFO [Listener at localhost/35527] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188682486 after 0ms 2023-05-27 11:58:35,068 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@25bdea8e] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1757707561-172.31.14.131-1685188667165:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:33835,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data1/current/BP-1757707561-172.31.14.131-1685188667165/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data1/current/BP-1757707561-172.31.14.131-1685188667165/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 
4 more 2023-05-27 11:58:38,073 INFO [Listener at localhost/35527] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188682486 after 4001ms 2023-05-27 11:58:38,073 DEBUG [Listener at localhost/35527] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188682486 2023-05-27 11:58:38,077 DEBUG [Listener at localhost/35527] wal.TestLogRolling(522): #6: [row1003/info:/1685188692541/Put/vlen=1045/seqid=0] 2023-05-27 11:58:38,077 DEBUG [Listener at localhost/35527] wal.TestLogRolling(522): #7: [row1004/info:/1685188694545/Put/vlen=1045/seqid=0] 2023-05-27 11:58:38,077 DEBUG [Listener at localhost/35527] wal.ProtobufLogReader(420): EOF at position 2425 2023-05-27 11:58:38,077 DEBUG [Listener at localhost/35527] wal.TestLogRolling(512): recovering lease for hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188698002 2023-05-27 11:58:38,077 INFO [Listener at localhost/35527] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188698002 2023-05-27 11:58:38,078 INFO [Listener at localhost/35527] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188698002 after 1ms 2023-05-27 11:58:38,078 DEBUG [Listener at localhost/35527] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188698002 2023-05-27 11:58:38,081 DEBUG [Listener at localhost/35527] wal.TestLogRolling(522): #9: [row1005/info:/1685188708036/Put/vlen=1045/seqid=0] 2023-05-27 11:58:38,081 DEBUG [Listener at localhost/35527] wal.TestLogRolling(512): recovering lease for hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039 2023-05-27 11:58:38,081 INFO [Listener at localhost/35527] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039 2023-05-27 11:58:38,081 WARN [IPC Server handler 0 on default port 38451] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039 has not been closed. Lease recovery is in progress. 
RecoveryId = 1024 for block blk_1073741841_1021 2023-05-27 11:58:38,081 INFO [Listener at localhost/35527] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039 after 0ms 2023-05-27 11:58:39,067 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1891407603_17 at /127.0.0.1:50322 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:33835:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50322 dst: /127.0.0.1:33835 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:33835 remote=/127.0.0.1:50322]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:39,068 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1891407603_17 at /127.0.0.1:34160 [Receiving block BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:46105:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34160 dst: /127.0.0.1:46105 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:39,068 WARN [ResponseProcessor for block BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 11:58:39,069 WARN [DataStreamer for file /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039 block BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33835,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK], DatanodeInfoWithStorage[127.0.0.1:46105,DS-bd22021e-38c6-44ba-9fbd-6ac4ccd3dcb2,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33835,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]) is bad. 2023-05-27 11:58:39,074 WARN [DataStreamer for file /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039 block BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at 
com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:42,082 INFO [Listener at localhost/35527] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on 
file=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039 after 4001ms 2023-05-27 11:58:42,082 DEBUG [Listener at localhost/35527] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039 2023-05-27 11:58:42,086 DEBUG [Listener at localhost/35527] wal.ProtobufLogReader(420): EOF at position 83 2023-05-27 11:58:42,087 INFO [Listener at localhost/35527] regionserver.HRegion(2745): Flushing 565a2df624a6368b84694a6c62f75c2e 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-05-27 11:58:42,088 WARN [RS:0;jenkins-hbase4:35345.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=11, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at 
com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:42,088 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C35345%2C1685188667804:(num 1685188710039) roll requested 2023-05-27 11:58:42,088 DEBUG [Listener at localhost/35527] regionserver.HRegion(2446): Flush status journal for 565a2df624a6368b84694a6c62f75c2e: 2023-05-27 11:58:42,088 INFO [Listener at localhost/35527] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) 
at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:42,090 INFO [Listener at localhost/35527] regionserver.HRegion(2745): Flushing a88c452815ef6b91fa062314e704d61e 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-27 11:58:42,090 DEBUG [Listener at localhost/35527] regionserver.HRegion(2446): Flush status journal for a88c452815ef6b91fa062314e704d61e: 2023-05-27 11:58:42,090 INFO [Listener at localhost/35527] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:42,091 INFO [Listener at localhost/35527] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.95 KB heapSize=5.48 KB 2023-05-27 11:58:42,091 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:42,092 DEBUG [Listener at localhost/35527] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-27 11:58:42,092 INFO [Listener at localhost/35527] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:42,096 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-27 11:58:42,097 INFO [Listener at localhost/35527] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-27 11:58:42,099 DEBUG [Listener at localhost/35527] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x17e0aeb9 to 127.0.0.1:59984 2023-05-27 11:58:42,099 DEBUG [Listener at localhost/35527] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:58:42,099 DEBUG [Listener at localhost/35527] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-27 11:58:42,099 DEBUG [Listener at localhost/35527] util.JVMClusterUtil(257): Found active master hash=1632160968, stopped=false 2023-05-27 11:58:42,099 INFO [Listener at localhost/35527] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,33701,1685188667764 2023-05-27 11:58:42,103 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): regionserver:35345-0x1006c80ec670001, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 11:58:42,103 INFO [Listener at localhost/35527] procedure2.ProcedureExecutor(629): Stopping 2023-05-27 11:58:42,103 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 11:58:42,103 DEBUG [Listener at localhost/35527] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x27d7e11d to 127.0.0.1:59984 2023-05-27 11:58:42,103 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:58:42,103 DEBUG [Listener at localhost/35527] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:58:42,103 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35345-0x1006c80ec670001, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:58:42,103 INFO [Listener at localhost/35527] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,35345,1685188667804' ***** 2023-05-27 11:58:42,103 INFO [Listener at localhost/35527] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 11:58:42,103 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:58:42,107 INFO [RS:0;jenkins-hbase4:35345] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 11:58:42,107 INFO [RS:0;jenkins-hbase4:35345] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
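The lease recovery and WAL re-read earlier in this run ("Recovered lease, attempt=1 ... after 4001ms", then ProtobufLogReader reporting "EOF at position 83") follow the usual recover-then-read pattern. Below is a minimal sketch of that pattern, assuming the HBase 2.x APIs named in the log (RecoverLeaseFSUtils, the WAL reader obtained through WALFactory); the class name, the path argument, and the bare Configuration are illustrative placeholders, not taken from this run.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.util.RecoverLeaseFSUtils;
import org.apache.hadoop.hbase.wal.WAL;
import org.apache.hadoop.hbase.wal.WALFactory;

public class WalLeaseRecoverySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Hypothetical WAL path; in this run the file lives under .../WALs/<server>/.
    Path walPath = new Path(args[0]);

    // Blocks until the NameNode has closed the file, retrying internally;
    // the log above shows this succeeding on attempt=1 after roughly 4 seconds.
    RecoverLeaseFSUtils.recoverFileLease(fs, walPath, conf);

    // Re-read the recovered WAL. A freshly rolled WAL with no entries is header-only
    // (83 bytes in this run), so the reader hits EOF immediately, as logged above.
    try (WAL.Reader reader = WALFactory.createReader(fs, walPath, conf)) {
      WAL.Entry entry;
      while ((entry = reader.next()) != null) {
        System.out.println(entry.getKey() + " edits=" + entry.getEdit().size());
      }
    }
  }
}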
2023-05-27 11:58:42,107 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 11:58:42,107 INFO [RS:0;jenkins-hbase4:35345] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-27 11:58:42,108 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(3303): Received CLOSE for 565a2df624a6368b84694a6c62f75c2e 2023-05-27 11:58:42,108 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(3303): Received CLOSE for a88c452815ef6b91fa062314e704d61e 2023-05-27 11:58:42,108 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:58:42,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 565a2df624a6368b84694a6c62f75c2e, disabling compactions & flushes 2023-05-27 11:58:42,108 DEBUG [RS:0;jenkins-hbase4:35345] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x05843759 to 127.0.0.1:59984 2023-05-27 11:58:42,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 2023-05-27 11:58:42,108 DEBUG [RS:0;jenkins-hbase4:35345] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:58:42,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 2023-05-27 11:58:42,108 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039 newFile=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188722088 2023-05-27 11:58:42,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. after waiting 0 ms 2023-05-27 11:58:42,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 2023-05-27 11:58:42,108 INFO [RS:0;jenkins-hbase4:35345] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-27 11:58:42,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 565a2df624a6368b84694a6c62f75c2e 1/1 column families, dataSize=4.20 KB heapSize=4.98 KB 2023-05-27 11:58:42,108 INFO [RS:0;jenkins-hbase4:35345] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-27 11:58:42,109 INFO [RS:0;jenkins-hbase4:35345] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
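The "preLogRoll: oldFile=... newFile=..." entry above is emitted by a WAL roll observer the test registers on the region server's WAL. A hedged sketch of such an observer, assuming the HBase 2.x WALActionsListener interface, follows; the class name and the register helper are illustrative, and in a real test the WAL handle would come from the region server under test.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener;
import org.apache.hadoop.hbase.wal.WAL;

public class LogRollObserver implements WALActionsListener {
  // Invoked just before the WAL switches writers, matching the
  // "preLogRoll: oldFile=... newFile=..." line in the log above.
  @Override
  public void preLogRoll(Path oldPath, Path newPath) {
    System.out.println("preLogRoll: oldFile=" + oldPath + " newFile=" + newPath);
  }

  // Invoked once the new writer is in place.
  @Override
  public void postLogRoll(Path oldPath, Path newPath) {
    System.out.println("postLogRoll: oldFile=" + oldPath + " newFile=" + newPath);
  }

  // Attach the observer to a WAL instance (e.g. the region server's default WAL).
  public static void register(WAL wal) {
    wal.registerWALActionsListener(new LogRollObserver());
  }
}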
2023-05-27 11:58:42,108 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL 2023-05-27 11:58:42,109 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 11:58:42,109 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188722088 2023-05-27 11:58:42,109 WARN [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultMemStore(90): Snapshot called again without clearing previous. Doing nothing. Another ongoing flush or did we fail last attempt? 2023-05-27 11:58:42,109 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:42,109 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 11:58:42,109 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039 failed. 
Cause="Unexpected BlockUCState: BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-05-27 11:58:42,109 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-27 11:58:42,109 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown 
Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:42,109 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 11:58:42,109 DEBUG [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1478): Online 
Regions={565a2df624a6368b84694a6c62f75c2e=TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e., a88c452815ef6b91fa062314e704d61e=hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e., 1588230740=hbase:meta,,1.1588230740} 2023-05-27 11:58:42,109 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804/jenkins-hbase4.apache.org%2C35345%2C1685188667804.1685188710039, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1757707561-172.31.14.131-1685188667165:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:42,109 WARN [RS:0;jenkins-hbase4:35345.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=13, requesting roll of WAL java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at 
org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.doFlush(CodedOutputStream.java:3041) at org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.flushIfNotAvailable(CodedOutputStream.java:3036) at org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeUInt64(CodedOutputStream.java:2726) at org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos$WALKey.writeTo(WALProtos.java:1878) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite.writeDelimitedTo(AbstractMessageLite.java:95) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:55) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:329) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendEntry(AbstractFSWAL.java:1105) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1199) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:42,109 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 11:58:42,110 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:58:42,109 DEBUG [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1504): Waiting on 1588230740, 565a2df624a6368b84694a6c62f75c2e, a88c452815ef6b91fa062314e704d61e 2023-05-27 11:58:42,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 565a2df624a6368b84694a6c62f75c2e: 2023-05-27 11:58:42,110 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:42,110 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 11:58:42,111 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,35345,1685188667804: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 
***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=13, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.doFlush(CodedOutputStream.java:3041) at org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.flushIfNotAvailable(CodedOutputStream.java:3036) at org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeUInt64(CodedOutputStream.java:2726) at org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos$WALKey.writeTo(WALProtos.java:1878) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite.writeDelimitedTo(AbstractMessageLite.java:95) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:55) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:329) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendEntry(AbstractFSWAL.java:1105) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1199) ... 5 more 2023-05-27 11:58:42,111 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(165): Failed to shutdown wal java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-63cd0c31-927a-466e-974e-9bdb4955e2c0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 11:58:42,111 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 11:58:42,112 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-27 11:58:42,112 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 11:58:42,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-27 11:58:42,112 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-27 11:58:42,112 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/WALs/jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:58:42,113 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-27 11:58:42,113 DEBUG [regionserver/jenkins-hbase4:0.logRoller] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Failed log close in log roller 2023-05-27 11:58:42,113 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C35345%2C1685188667804.meta:.meta(num 1685188668322) roll requested 2023-05-27 11:58:42,113 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(874): WAL closed. Skipping rolling of writer 2023-05-27 11:58:42,113 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C35345%2C1685188667804:(num 1685188722088) roll requested 2023-05-27 11:58:42,113 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(874): WAL closed. 
Skipping rolling of writer 2023-05-27 11:58:42,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-27 11:58:42,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-27 11:58:42,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-27 11:58:42,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1056440320, "init": 513802240, "max": 2051014656, "used": 412634984 }, "NonHeapMemoryUsage": { "committed": 139223040, "init": 2555904, "max": -1, "used": 136655080 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-27 11:58:42,114 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33701] master.MasterRpcServices(609): jenkins-hbase4.apache.org,35345,1685188667804 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,35345,1685188667804: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=13, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.doFlush(CodedOutputStream.java:3041) at org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.flushIfNotAvailable(CodedOutputStream.java:3036) at org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeUInt64(CodedOutputStream.java:2726) at org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos$WALKey.writeTo(WALProtos.java:1878) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite.writeDelimitedTo(AbstractMessageLite.java:95) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:55) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:329) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendEntry(AbstractFSWAL.java:1105) at 
org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1199) ... 5 more 2023-05-27 11:58:42,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a88c452815ef6b91fa062314e704d61e, disabling compactions & flushes 2023-05-27 11:58:42,115 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 2023-05-27 11:58:42,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 2023-05-27 11:58:42,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. after waiting 0 ms 2023-05-27 11:58:42,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 2023-05-27 11:58:42,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a88c452815ef6b91fa062314e704d61e: 2023-05-27 11:58:42,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 2023-05-27 11:58:42,141 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-27 11:58:42,141 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-27 11:58:42,311 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(3303): Received CLOSE for 565a2df624a6368b84694a6c62f75c2e 2023-05-27 11:58:42,311 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(3303): Received CLOSE for a88c452815ef6b91fa062314e704d61e 2023-05-27 11:58:42,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 565a2df624a6368b84694a6c62f75c2e, disabling compactions & flushes 2023-05-27 11:58:42,311 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 11:58:42,311 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 
2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 11:58:42,312 DEBUG [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1504): Waiting on 1588230740, 565a2df624a6368b84694a6c62f75c2e, a88c452815ef6b91fa062314e704d61e 2023-05-27 11:58:42,312 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. after waiting 0 ms 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 565a2df624a6368b84694a6c62f75c2e: 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685188668940.565a2df624a6368b84694a6c62f75c2e. 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a88c452815ef6b91fa062314e704d61e, disabling compactions & flushes 2023-05-27 11:58:42,312 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. after waiting 0 ms 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 
2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a88c452815ef6b91fa062314e704d61e: 2023-05-27 11:58:42,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685188668376.a88c452815ef6b91fa062314e704d61e. 2023-05-27 11:58:42,512 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-27 11:58:42,512 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35345,1685188667804; all regions closed. 2023-05-27 11:58:42,512 DEBUG [RS:0;jenkins-hbase4:35345] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:58:42,512 INFO [RS:0;jenkins-hbase4:35345] regionserver.LeaseManager(133): Closed leases 2023-05-27 11:58:42,512 INFO [RS:0;jenkins-hbase4:35345] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-27 11:58:42,512 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 11:58:42,513 INFO [RS:0;jenkins-hbase4:35345] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35345 2023-05-27 11:58:42,516 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): regionserver:35345-0x1006c80ec670001, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35345,1685188667804 2023-05-27 11:58:42,516 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:58:42,516 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): regionserver:35345-0x1006c80ec670001, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:58:42,517 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35345,1685188667804] 2023-05-27 11:58:42,517 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35345,1685188667804; numProcessing=1 2023-05-27 11:58:42,518 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35345,1685188667804 already deleted, retry=false 2023-05-27 11:58:42,518 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35345,1685188667804 expired; onlineServers=0 2023-05-27 11:58:42,518 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33701,1685188667764' ***** 2023-05-27 11:58:42,518 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-27 11:58:42,518 DEBUG [M:0;jenkins-hbase4:33701] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@65775439, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 11:58:42,518 INFO [M:0;jenkins-hbase4:33701] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33701,1685188667764 2023-05-27 11:58:42,518 INFO [M:0;jenkins-hbase4:33701] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33701,1685188667764; all regions closed. 2023-05-27 11:58:42,518 DEBUG [M:0;jenkins-hbase4:33701] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:58:42,518 DEBUG [M:0;jenkins-hbase4:33701] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-27 11:58:42,518 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-27 11:58:42,518 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188667958] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188667958,5,FailOnTimeoutGroup] 2023-05-27 11:58:42,518 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188667958] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188667958,5,FailOnTimeoutGroup] 2023-05-27 11:58:42,518 DEBUG [M:0;jenkins-hbase4:33701] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-27 11:58:42,520 INFO [M:0;jenkins-hbase4:33701] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-27 11:58:42,520 INFO [M:0;jenkins-hbase4:33701] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-27 11:58:42,520 INFO [M:0;jenkins-hbase4:33701] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-27 11:58:42,520 DEBUG [M:0;jenkins-hbase4:33701] master.HMaster(1512): Stopping service threads 2023-05-27 11:58:42,520 INFO [M:0;jenkins-hbase4:33701] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-27 11:58:42,520 ERROR [M:0;jenkins-hbase4:33701] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-27 11:58:42,520 INFO [M:0;jenkins-hbase4:33701] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-27 11:58:42,520 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-27 11:58:42,520 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-27 11:58:42,521 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:58:42,521 DEBUG [M:0;jenkins-hbase4:33701] zookeeper.ZKUtil(398): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-27 11:58:42,521 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:58:42,521 WARN [M:0;jenkins-hbase4:33701] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-27 11:58:42,521 INFO [M:0;jenkins-hbase4:33701] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-27 11:58:42,521 INFO [M:0;jenkins-hbase4:33701] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-27 11:58:42,522 DEBUG [M:0;jenkins-hbase4:33701] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 11:58:42,522 INFO [M:0;jenkins-hbase4:33701] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:58:42,522 DEBUG [M:0;jenkins-hbase4:33701] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:58:42,522 DEBUG [M:0;jenkins-hbase4:33701] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 11:58:42,522 DEBUG [M:0;jenkins-hbase4:33701] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-27 11:58:42,522 INFO [M:0;jenkins-hbase4:33701] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.16 KB heapSize=45.78 KB 2023-05-27 11:58:42,534 INFO [M:0;jenkins-hbase4:33701] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.16 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/406bdfb5c82c465ab730435d9fe5b512 2023-05-27 11:58:42,540 DEBUG [M:0;jenkins-hbase4:33701] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/406bdfb5c82c465ab730435d9fe5b512 as hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/406bdfb5c82c465ab730435d9fe5b512 2023-05-27 11:58:42,545 INFO [M:0;jenkins-hbase4:33701] regionserver.HStore(1080): Added hdfs://localhost:38451/user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/406bdfb5c82c465ab730435d9fe5b512, entries=11, sequenceid=92, filesize=7.0 K 2023-05-27 11:58:42,547 INFO [M:0;jenkins-hbase4:33701] regionserver.HRegion(2948): Finished flush of dataSize ~38.16 KB/39075, heapSize ~45.77 KB/46864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=92, compaction requested=false 2023-05-27 11:58:42,548 INFO [M:0;jenkins-hbase4:33701] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:58:42,548 DEBUG [M:0;jenkins-hbase4:33701] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:58:42,550 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/e392aa9d-ec2f-30ea-3494-59c2efdbbb08/MasterData/WALs/jenkins-hbase4.apache.org,33701,1685188667764 2023-05-27 11:58:42,554 INFO [M:0;jenkins-hbase4:33701] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-27 11:58:42,554 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 11:58:42,555 INFO [M:0;jenkins-hbase4:33701] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33701 2023-05-27 11:58:42,558 DEBUG [M:0;jenkins-hbase4:33701] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,33701,1685188667764 already deleted, retry=false 2023-05-27 11:58:42,617 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): regionserver:35345-0x1006c80ec670001, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:58:42,617 INFO [RS:0;jenkins-hbase4:35345] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35345,1685188667804; zookeeper connection closed. 
2023-05-27 11:58:42,617 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): regionserver:35345-0x1006c80ec670001, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:58:42,618 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5b43b6d6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5b43b6d6 2023-05-27 11:58:42,620 INFO [Listener at localhost/35527] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-27 11:58:42,717 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:58:42,717 INFO [M:0;jenkins-hbase4:33701] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33701,1685188667764; zookeeper connection closed. 2023-05-27 11:58:42,717 DEBUG [Listener at localhost/44441-EventThread] zookeeper.ZKWatcher(600): master:33701-0x1006c80ec670000, quorum=127.0.0.1:59984, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:58:42,718 WARN [Listener at localhost/35527] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:58:42,723 INFO [Listener at localhost/35527] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:58:42,827 WARN [BP-1757707561-172.31.14.131-1685188667165 heartbeating to localhost/127.0.0.1:38451] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 11:58:42,827 WARN [BP-1757707561-172.31.14.131-1685188667165 heartbeating to localhost/127.0.0.1:38451] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1757707561-172.31.14.131-1685188667165 (Datanode Uuid 84211755-dc60-40c5-854f-c2dd72a6328b) service to localhost/127.0.0.1:38451 2023-05-27 11:58:42,828 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data3/current/BP-1757707561-172.31.14.131-1685188667165] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:58:42,828 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data4/current/BP-1757707561-172.31.14.131-1685188667165] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:58:42,830 WARN [Listener at localhost/35527] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:58:42,833 INFO [Listener at localhost/35527] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:58:42,837 WARN [BP-1757707561-172.31.14.131-1685188667165 heartbeating to localhost/127.0.0.1:38451] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1757707561-172.31.14.131-1685188667165 (Datanode Uuid a2049cfd-6869-4c47-8ced-371d69b74c63) service to localhost/127.0.0.1:38451 2023-05-27 11:58:42,837 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data1/current/BP-1757707561-172.31.14.131-1685188667165] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:58:42,838 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/cluster_3a3c757a-6910-e71a-c772-39bf10ca3c43/dfs/data/data2/current/BP-1757707561-172.31.14.131-1685188667165] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:58:42,946 INFO [Listener at localhost/35527] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:58:43,057 INFO [Listener at localhost/35527] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-27 11:58:43,070 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-27 11:58:43,079 INFO [Listener at localhost/35527] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=88 (was 78) Potentially hanging thread: nioEventLoopGroup-29-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (498694620) connection to localhost/127.0.0.1:38451 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-27-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/35527 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:38451 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:38451 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (498694620) connection to localhost/127.0.0.1:38451 from jenkins 
java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-27-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (498694620) connection to localhost/127.0.0.1:38451 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) - Thread LEAK? -, OpenFileDescriptor=460 (was 472), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=40 (was 86), ProcessCount=169 (was 169), AvailableMemoryMB=3648 (was 4022) 2023-05-27 11:58:43,088 INFO [Listener at localhost/35527] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=88, OpenFileDescriptor=460, MaxFileDescriptor=60000, SystemLoadAverage=40, ProcessCount=169, AvailableMemoryMB=3648 2023-05-27 11:58:43,088 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-27 11:58:43,088 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/hadoop.log.dir so I do NOT create it in target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7 2023-05-27 11:58:43,088 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1f65652d-ed96-125a-5835-ac40e0c004ed/hadoop.tmp.dir so I do NOT create it in target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7 2023-05-27 11:58:43,088 INFO [Listener at localhost/35527] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/cluster_c86b3b99-27ae-57ae-d8bb-0128a378b958, deleteOnExit=true 2023-05-27 11:58:43,088 INFO 
[Listener at localhost/35527] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-27 11:58:43,088 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/test.cache.data in system properties and HBase conf 2023-05-27 11:58:43,088 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/hadoop.tmp.dir in system properties and HBase conf 2023-05-27 11:58:43,089 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/hadoop.log.dir in system properties and HBase conf 2023-05-27 11:58:43,089 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-27 11:58:43,089 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-27 11:58:43,089 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-27 11:58:43,089 DEBUG [Listener at localhost/35527] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-27 11:58:43,089 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-27 11:58:43,089 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-27 11:58:43,089 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-27 11:58:43,089 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 11:58:43,090 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-27 11:58:43,090 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-27 11:58:43,090 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 11:58:43,090 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 11:58:43,090 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-27 11:58:43,090 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/nfs.dump.dir in system properties and HBase conf 2023-05-27 11:58:43,090 INFO [Listener at localhost/35527] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/java.io.tmpdir in system properties and HBase conf 2023-05-27 11:58:43,090 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 11:58:43,090 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-27 11:58:43,090 INFO [Listener at localhost/35527] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-27 11:58:43,092 WARN [Listener at localhost/35527] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-27 11:58:43,094 WARN [Listener at localhost/35527] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 11:58:43,095 WARN [Listener at localhost/35527] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 11:58:43,137 WARN [Listener at localhost/35527] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:58:43,140 INFO [Listener at localhost/35527] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:58:43,144 INFO [Listener at localhost/35527] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/java.io.tmpdir/Jetty_localhost_40635_hdfs____.8uarsq/webapp 2023-05-27 11:58:43,237 INFO [Listener at localhost/35527] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40635 2023-05-27 11:58:43,239 WARN [Listener at localhost/35527] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-27 11:58:43,241 WARN [Listener at localhost/35527] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 11:58:43,242 WARN [Listener at localhost/35527] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 11:58:43,283 WARN [Listener at localhost/33035] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:58:43,292 WARN [Listener at localhost/33035] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:58:43,294 WARN [Listener at localhost/33035] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:58:43,295 INFO [Listener at localhost/33035] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:58:43,299 INFO [Listener at localhost/33035] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/java.io.tmpdir/Jetty_localhost_39321_datanode____.w0c1al/webapp 2023-05-27 11:58:43,388 INFO [Listener at localhost/33035] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39321 2023-05-27 11:58:43,394 WARN [Listener at localhost/40327] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:58:43,405 WARN [Listener at localhost/40327] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:58:43,407 WARN [Listener at localhost/40327] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:58:43,408 INFO [Listener at localhost/40327] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:58:43,414 INFO [Listener at localhost/40327] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/java.io.tmpdir/Jetty_localhost_37919_datanode____wzc2hw/webapp 2023-05-27 11:58:43,481 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1feb6232ffb15666: Processing first storage report for DS-ec21d720-fc0f-40de-a7f6-2a628341f03e from datanode 58c91519-83be-4fc7-b0b2-b251b5035b91 2023-05-27 11:58:43,481 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1feb6232ffb15666: from storage DS-ec21d720-fc0f-40de-a7f6-2a628341f03e node DatanodeRegistration(127.0.0.1:34901, datanodeUuid=58c91519-83be-4fc7-b0b2-b251b5035b91, infoPort=40815, infoSecurePort=0, ipcPort=40327, storageInfo=lv=-57;cid=testClusterID;nsid=108225198;c=1685188723097), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:58:43,481 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1feb6232ffb15666: Processing first storage report for DS-1974f51b-4f39-4cfa-80e7-4977d9467a44 from datanode 58c91519-83be-4fc7-b0b2-b251b5035b91 2023-05-27 11:58:43,481 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0x1feb6232ffb15666: from storage DS-1974f51b-4f39-4cfa-80e7-4977d9467a44 node DatanodeRegistration(127.0.0.1:34901, datanodeUuid=58c91519-83be-4fc7-b0b2-b251b5035b91, infoPort=40815, infoSecurePort=0, ipcPort=40327, storageInfo=lv=-57;cid=testClusterID;nsid=108225198;c=1685188723097), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:58:43,510 INFO [Listener at localhost/40327] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37919 2023-05-27 11:58:43,518 WARN [Listener at localhost/44023] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:58:43,613 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x64451d68fe36a8fc: Processing first storage report for DS-6990e6da-ce8a-4552-886e-26889f827082 from datanode 53b14727-60e4-4d70-8ccd-bb2e5d580c8f 2023-05-27 11:58:43,613 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x64451d68fe36a8fc: from storage DS-6990e6da-ce8a-4552-886e-26889f827082 node DatanodeRegistration(127.0.0.1:41839, datanodeUuid=53b14727-60e4-4d70-8ccd-bb2e5d580c8f, infoPort=37225, infoSecurePort=0, ipcPort=44023, storageInfo=lv=-57;cid=testClusterID;nsid=108225198;c=1685188723097), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:58:43,613 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x64451d68fe36a8fc: Processing first storage report for DS-b6a1e5a2-b5c2-416b-a36a-59cd60b6ff09 from datanode 53b14727-60e4-4d70-8ccd-bb2e5d580c8f 2023-05-27 11:58:43,613 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x64451d68fe36a8fc: from storage DS-b6a1e5a2-b5c2-416b-a36a-59cd60b6ff09 node DatanodeRegistration(127.0.0.1:41839, datanodeUuid=53b14727-60e4-4d70-8ccd-bb2e5d580c8f, infoPort=37225, infoSecurePort=0, ipcPort=44023, storageInfo=lv=-57;cid=testClusterID;nsid=108225198;c=1685188723097), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:58:43,625 DEBUG [Listener at localhost/44023] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7 2023-05-27 11:58:43,627 INFO [Listener at localhost/44023] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/cluster_c86b3b99-27ae-57ae-d8bb-0128a378b958/zookeeper_0, clientPort=62748, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/cluster_c86b3b99-27ae-57ae-d8bb-0128a378b958/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/cluster_c86b3b99-27ae-57ae-d8bb-0128a378b958/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-27 11:58:43,628 INFO [Listener at localhost/44023] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62748 2023-05-27 11:58:43,628 INFO [Listener at localhost/44023] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:58:43,629 INFO [Listener at localhost/44023] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:58:43,640 INFO [Listener at localhost/44023] util.FSUtils(471): Created version file at hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52 with version=8 2023-05-27 11:58:43,640 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/hbase-staging 2023-05-27 11:58:43,642 INFO [Listener at localhost/44023] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 11:58:43,642 INFO [Listener at localhost/44023] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:58:43,642 INFO [Listener at localhost/44023] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 11:58:43,642 INFO [Listener at localhost/44023] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 11:58:43,643 INFO [Listener at localhost/44023] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:58:43,643 INFO [Listener at localhost/44023] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 11:58:43,643 INFO [Listener at localhost/44023] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 11:58:43,644 INFO [Listener at localhost/44023] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35547 2023-05-27 11:58:43,644 INFO [Listener at localhost/44023] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:58:43,645 INFO [Listener at localhost/44023] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:58:43,646 INFO [Listener at localhost/44023] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35547 connecting to ZooKeeper ensemble=127.0.0.1:62748 2023-05-27 11:58:43,653 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:355470x0, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 11:58:43,653 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35547-0x1006c81c6ad0000 connected 2023-05-27 11:58:43,666 DEBUG [Listener at localhost/44023] 
zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:58:43,666 DEBUG [Listener at localhost/44023] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:58:43,667 DEBUG [Listener at localhost/44023] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 11:58:43,668 DEBUG [Listener at localhost/44023] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35547 2023-05-27 11:58:43,668 DEBUG [Listener at localhost/44023] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35547 2023-05-27 11:58:43,668 DEBUG [Listener at localhost/44023] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35547 2023-05-27 11:58:43,668 DEBUG [Listener at localhost/44023] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35547 2023-05-27 11:58:43,669 DEBUG [Listener at localhost/44023] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35547 2023-05-27 11:58:43,669 INFO [Listener at localhost/44023] master.HMaster(444): hbase.rootdir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52, hbase.cluster.distributed=false 2023-05-27 11:58:43,681 INFO [Listener at localhost/44023] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 11:58:43,681 INFO [Listener at localhost/44023] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:58:43,681 INFO [Listener at localhost/44023] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 11:58:43,681 INFO [Listener at localhost/44023] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 11:58:43,681 INFO [Listener at localhost/44023] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:58:43,681 INFO [Listener at localhost/44023] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 11:58:43,681 INFO [Listener at localhost/44023] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 11:58:43,683 INFO [Listener at localhost/44023] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34733 2023-05-27 11:58:43,683 INFO [Listener at localhost/44023] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-27 11:58:43,683 DEBUG [Listener at localhost/44023] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-27 
11:58:43,684 INFO [Listener at localhost/44023] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:58:43,685 INFO [Listener at localhost/44023] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:58:43,686 INFO [Listener at localhost/44023] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34733 connecting to ZooKeeper ensemble=127.0.0.1:62748 2023-05-27 11:58:43,688 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:347330x0, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 11:58:43,689 DEBUG [Listener at localhost/44023] zookeeper.ZKUtil(164): regionserver:347330x0, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:58:43,690 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34733-0x1006c81c6ad0001 connected 2023-05-27 11:58:43,690 DEBUG [Listener at localhost/44023] zookeeper.ZKUtil(164): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:58:43,691 DEBUG [Listener at localhost/44023] zookeeper.ZKUtil(164): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 11:58:43,694 DEBUG [Listener at localhost/44023] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34733 2023-05-27 11:58:43,695 DEBUG [Listener at localhost/44023] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34733 2023-05-27 11:58:43,695 DEBUG [Listener at localhost/44023] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34733 2023-05-27 11:58:43,696 DEBUG [Listener at localhost/44023] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34733 2023-05-27 11:58:43,696 DEBUG [Listener at localhost/44023] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34733 2023-05-27 11:58:43,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,35547,1685188723642 2023-05-27 11:58:43,698 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 11:58:43,699 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,35547,1685188723642 2023-05-27 11:58:43,700 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 11:58:43,700 DEBUG [Listener at 
localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 11:58:43,700 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:58:43,700 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 11:58:43,701 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,35547,1685188723642 from backup master directory 2023-05-27 11:58:43,701 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 11:58:43,704 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,35547,1685188723642 2023-05-27 11:58:43,704 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-27 11:58:43,704 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,35547,1685188723642 2023-05-27 11:58:43,704 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 11:58:43,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/hbase.id with ID: e97375ca-1a74-4c90-ae8d-564be61bed8e 2023-05-27 11:58:43,726 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:58:43,728 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:58:43,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1b089c80 to 127.0.0.1:62748 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:58:43,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2ec48c1d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:58:43,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 
'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 11:58:43,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-27 11:58:43,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:58:43,743 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/data/master/store-tmp 2023-05-27 11:58:43,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:58:43,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 11:58:43,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:58:43,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:58:43,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 11:58:43,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:58:43,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-27 11:58:43,750 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:58:43,750 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/WALs/jenkins-hbase4.apache.org,35547,1685188723642 2023-05-27 11:58:43,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35547%2C1685188723642, suffix=, logDir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/WALs/jenkins-hbase4.apache.org,35547,1685188723642, archiveDir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/oldWALs, maxLogs=10 2023-05-27 11:58:43,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/WALs/jenkins-hbase4.apache.org,35547,1685188723642/jenkins-hbase4.apache.org%2C35547%2C1685188723642.1685188723753 2023-05-27 11:58:43,758 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34901,DS-ec21d720-fc0f-40de-a7f6-2a628341f03e,DISK], DatanodeInfoWithStorage[127.0.0.1:41839,DS-6990e6da-ce8a-4552-886e-26889f827082,DISK]] 2023-05-27 11:58:43,758 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:58:43,758 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:58:43,758 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:58:43,758 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:58:43,760 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:58:43,761 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-27 11:58:43,761 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-27 11:58:43,762 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:58:43,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:58:43,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:58:43,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:58:43,768 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:58:43,768 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=796237, jitterRate=0.012468323111534119}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:58:43,768 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:58:43,768 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-27 11:58:43,769 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-27 11:58:43,769 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-27 11:58:43,769 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-05-27 11:58:43,770 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-27 11:58:43,770 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-27 11:58:43,770 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-27 11:58:43,773 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-27 11:58:43,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-27 11:58:43,785 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-27 11:58:43,785 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-27 11:58:43,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-27 11:58:43,786 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-27 11:58:43,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-27 11:58:43,788 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:58:43,788 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-27 11:58:43,788 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-27 11:58:43,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-27 11:58:43,792 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 11:58:43,792 DEBUG [Listener at localhost/44023-EventThread] 
zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 11:58:43,792 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:58:43,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,35547,1685188723642, sessionid=0x1006c81c6ad0000, setting cluster-up flag (Was=false) 2023-05-27 11:58:43,795 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:58:43,800 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-27 11:58:43,800 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35547,1685188723642 2023-05-27 11:58:43,802 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:58:43,806 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-27 11:58:43,807 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35547,1685188723642 2023-05-27 11:58:43,808 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/.hbase-snapshot/.tmp 2023-05-27 11:58:43,810 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-27 11:58:43,810 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:58:43,810 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:58:43,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:58:43,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:58:43,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, 
maxPoolSize=10 2023-05-27 11:58:43,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:58:43,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 11:58:43,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:58:43,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685188753815 2023-05-27 11:58:43,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-27 11:58:43,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-27 11:58:43,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-27 11:58:43,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-27 11:58:43,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-27 11:58:43,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-27 11:58:43,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-27 11:58:43,816 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-27 11:58:43,816 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-27 11:58:43,816 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-27 11:58:43,816 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 11:58:43,816 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-27 11:58:43,816 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-27 11:58:43,816 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-27 11:58:43,816 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188723816,5,FailOnTimeoutGroup] 2023-05-27 11:58:43,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188723817,5,FailOnTimeoutGroup] 2023-05-27 11:58:43,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 11:58:43,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-27 11:58:43,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-27 11:58:43,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-27 11:58:43,818 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 11:58:43,828 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 11:58:43,828 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 11:58:43,829 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52 2023-05-27 11:58:43,836 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:58:43,837 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 11:58:43,838 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/info 2023-05-27 11:58:43,839 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 11:58:43,839 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:58:43,840 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 11:58:43,841 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:58:43,841 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 11:58:43,842 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:58:43,842 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 11:58:43,843 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/table 2023-05-27 11:58:43,843 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 11:58:43,844 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:58:43,844 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740 2023-05-27 11:58:43,845 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740 2023-05-27 11:58:43,846 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 11:58:43,847 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 11:58:43,849 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:58:43,849 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=717283, jitterRate=-0.08792859315872192}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 11:58:43,849 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 11:58:43,850 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 11:58:43,850 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 11:58:43,850 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 11:58:43,850 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 11:58:43,850 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 11:58:43,850 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 11:58:43,850 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 11:58:43,851 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 11:58:43,851 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-27 11:58:43,851 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-27 11:58:43,852 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-27 11:58:43,854 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-27 11:58:43,898 INFO [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(951): ClusterId : e97375ca-1a74-4c90-ae8d-564be61bed8e 2023-05-27 11:58:43,899 DEBUG [RS:0;jenkins-hbase4:34733] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-27 11:58:43,901 DEBUG [RS:0;jenkins-hbase4:34733] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-27 11:58:43,901 DEBUG [RS:0;jenkins-hbase4:34733] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-27 11:58:43,904 DEBUG [RS:0;jenkins-hbase4:34733] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-27 11:58:43,904 DEBUG [RS:0;jenkins-hbase4:34733] zookeeper.ReadOnlyZKClient(139): Connect 0x36736aed to 127.0.0.1:62748 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:58:43,907 DEBUG [RS:0;jenkins-hbase4:34733] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38b49253, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:58:43,908 DEBUG [RS:0;jenkins-hbase4:34733] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c8b019d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 11:58:43,916 DEBUG [RS:0;jenkins-hbase4:34733] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:34733 2023-05-27 11:58:43,916 INFO [RS:0;jenkins-hbase4:34733] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-27 11:58:43,916 INFO [RS:0;jenkins-hbase4:34733] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-27 11:58:43,916 DEBUG [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-27 11:58:43,917 INFO [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,35547,1685188723642 with isa=jenkins-hbase4.apache.org/172.31.14.131:34733, startcode=1685188723680 2023-05-27 11:58:43,917 DEBUG [RS:0;jenkins-hbase4:34733] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-27 11:58:43,921 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55345, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 11:58:43,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:43,922 DEBUG [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52 2023-05-27 11:58:43,922 DEBUG [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33035 2023-05-27 11:58:43,922 DEBUG [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 11:58:43,924 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:58:43,924 DEBUG [RS:0;jenkins-hbase4:34733] zookeeper.ZKUtil(162): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:43,924 WARN [RS:0;jenkins-hbase4:34733] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-27 11:58:43,924 INFO [RS:0;jenkins-hbase4:34733] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:58:43,924 DEBUG [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(1946): logDir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:43,925 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34733,1685188723680] 2023-05-27 11:58:43,928 DEBUG [RS:0;jenkins-hbase4:34733] zookeeper.ZKUtil(162): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:43,929 DEBUG [RS:0;jenkins-hbase4:34733] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 11:58:43,929 INFO [RS:0;jenkins-hbase4:34733] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 11:58:43,930 INFO [RS:0;jenkins-hbase4:34733] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 11:58:43,930 INFO [RS:0;jenkins-hbase4:34733] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 11:58:43,930 INFO [RS:0;jenkins-hbase4:34733] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 11:58:43,930 INFO [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 11:58:43,931 INFO [RS:0;jenkins-hbase4:34733] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-27 11:58:43,931 DEBUG [RS:0;jenkins-hbase4:34733] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:58:43,932 DEBUG [RS:0;jenkins-hbase4:34733] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:58:43,932 DEBUG [RS:0;jenkins-hbase4:34733] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:58:43,932 DEBUG [RS:0;jenkins-hbase4:34733] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:58:43,932 DEBUG [RS:0;jenkins-hbase4:34733] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:58:43,932 DEBUG [RS:0;jenkins-hbase4:34733] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 11:58:43,932 DEBUG [RS:0;jenkins-hbase4:34733] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:58:43,932 DEBUG [RS:0;jenkins-hbase4:34733] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:58:43,932 DEBUG [RS:0;jenkins-hbase4:34733] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:58:43,932 DEBUG [RS:0;jenkins-hbase4:34733] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:58:43,933 INFO [RS:0;jenkins-hbase4:34733] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 11:58:43,933 INFO [RS:0;jenkins-hbase4:34733] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 11:58:43,933 INFO [RS:0;jenkins-hbase4:34733] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 11:58:43,943 INFO [RS:0;jenkins-hbase4:34733] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 11:58:43,943 INFO [RS:0;jenkins-hbase4:34733] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34733,1685188723680-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-27 11:58:43,954 INFO [RS:0;jenkins-hbase4:34733] regionserver.Replication(203): jenkins-hbase4.apache.org,34733,1685188723680 started 2023-05-27 11:58:43,954 INFO [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34733,1685188723680, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34733, sessionid=0x1006c81c6ad0001 2023-05-27 11:58:43,955 DEBUG [RS:0;jenkins-hbase4:34733] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 11:58:43,955 DEBUG [RS:0;jenkins-hbase4:34733] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:43,955 DEBUG [RS:0;jenkins-hbase4:34733] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34733,1685188723680' 2023-05-27 11:58:43,955 DEBUG [RS:0;jenkins-hbase4:34733] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:58:43,955 DEBUG [RS:0;jenkins-hbase4:34733] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:58:43,955 DEBUG [RS:0;jenkins-hbase4:34733] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 11:58:43,955 DEBUG [RS:0;jenkins-hbase4:34733] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 11:58:43,955 DEBUG [RS:0;jenkins-hbase4:34733] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:43,955 DEBUG [RS:0;jenkins-hbase4:34733] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34733,1685188723680' 2023-05-27 11:58:43,955 DEBUG [RS:0;jenkins-hbase4:34733] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 11:58:43,956 DEBUG [RS:0;jenkins-hbase4:34733] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 11:58:43,956 DEBUG [RS:0;jenkins-hbase4:34733] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 11:58:43,956 INFO [RS:0;jenkins-hbase4:34733] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 11:58:43,956 INFO [RS:0;jenkins-hbase4:34733] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-27 11:58:44,004 DEBUG [jenkins-hbase4:35547] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-27 11:58:44,005 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34733,1685188723680, state=OPENING 2023-05-27 11:58:44,006 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-27 11:58:44,012 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:58:44,012 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34733,1685188723680}] 2023-05-27 11:58:44,012 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 11:58:44,058 INFO [RS:0;jenkins-hbase4:34733] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34733%2C1685188723680, suffix=, logDir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680, archiveDir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/oldWALs, maxLogs=32 2023-05-27 11:58:44,066 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-27 11:58:44,067 INFO [RS:0;jenkins-hbase4:34733] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188724058 2023-05-27 11:58:44,068 DEBUG [RS:0;jenkins-hbase4:34733] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34901,DS-ec21d720-fc0f-40de-a7f6-2a628341f03e,DISK], DatanodeInfoWithStorage[127.0.0.1:41839,DS-6990e6da-ce8a-4552-886e-26889f827082,DISK]] 2023-05-27 11:58:44,167 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:44,167 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 11:58:44,169 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50454, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 11:58:44,173 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-27 11:58:44,173 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:58:44,175 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34733%2C1685188723680.meta, suffix=.meta, logDir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680, 
archiveDir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/oldWALs, maxLogs=32 2023-05-27 11:58:44,189 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680/jenkins-hbase4.apache.org%2C34733%2C1685188723680.meta.1685188724175.meta 2023-05-27 11:58:44,189 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34901,DS-ec21d720-fc0f-40de-a7f6-2a628341f03e,DISK], DatanodeInfoWithStorage[127.0.0.1:41839,DS-6990e6da-ce8a-4552-886e-26889f827082,DISK]] 2023-05-27 11:58:44,189 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:58:44,189 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-27 11:58:44,189 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-27 11:58:44,190 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-27 11:58:44,190 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-27 11:58:44,190 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:58:44,190 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-27 11:58:44,190 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-27 11:58:44,191 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 11:58:44,192 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/info 2023-05-27 11:58:44,192 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/info 2023-05-27 11:58:44,192 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 11:58:44,193 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:58:44,193 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 11:58:44,194 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:58:44,194 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:58:44,194 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 11:58:44,195 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:58:44,195 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 11:58:44,196 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/table 2023-05-27 11:58:44,196 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/table 2023-05-27 11:58:44,196 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, 
incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 11:58:44,197 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:58:44,198 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740 2023-05-27 11:58:44,199 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740 2023-05-27 11:58:44,201 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 11:58:44,202 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 11:58:44,203 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=819741, jitterRate=0.042355507612228394}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 11:58:44,203 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 11:58:44,206 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685188724166 2023-05-27 11:58:44,210 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-27 11:58:44,211 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-27 11:58:44,212 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34733,1685188723680, state=OPEN 2023-05-27 11:58:44,214 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-27 11:58:44,214 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 11:58:44,217 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-27 11:58:44,217 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34733,1685188723680 in 202 msec 2023-05-27 11:58:44,219 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=2, resume processing ppid=1 2023-05-27 11:58:44,219 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 366 msec 2023-05-27 11:58:44,221 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 412 msec 2023-05-27 11:58:44,221 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685188724221, completionTime=-1 2023-05-27 11:58:44,221 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-27 11:58:44,221 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-27 11:58:44,225 DEBUG [hconnection-0x3ca0b54d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 11:58:44,227 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50458, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 11:58:44,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-27 11:58:44,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685188784228 2023-05-27 11:58:44,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685188844228 2023-05-27 11:58:44,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-27 11:58:44,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35547,1685188723642-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 11:58:44,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35547,1685188723642-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:58:44,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35547,1685188723642-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:58:44,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:35547, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:58:44,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-27 11:58:44,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-27 11:58:44,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 11:58:44,235 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-27 11:58:44,235 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-27 11:58:44,236 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 11:58:44,237 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 11:58:44,239 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/.tmp/data/hbase/namespace/6bbeaf66518da0ed0e27e8cb0b7a8b0b 2023-05-27 11:58:44,240 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/.tmp/data/hbase/namespace/6bbeaf66518da0ed0e27e8cb0b7a8b0b empty. 2023-05-27 11:58:44,240 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/.tmp/data/hbase/namespace/6bbeaf66518da0ed0e27e8cb0b7a8b0b 2023-05-27 11:58:44,240 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-27 11:58:44,257 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-27 11:58:44,258 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6bbeaf66518da0ed0e27e8cb0b7a8b0b, NAME => 'hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/.tmp 2023-05-27 11:58:44,265 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:58:44,266 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 6bbeaf66518da0ed0e27e8cb0b7a8b0b, disabling compactions & flushes 2023-05-27 11:58:44,266 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 
2023-05-27 11:58:44,266 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 2023-05-27 11:58:44,266 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. after waiting 0 ms 2023-05-27 11:58:44,266 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 2023-05-27 11:58:44,266 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 2023-05-27 11:58:44,266 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 6bbeaf66518da0ed0e27e8cb0b7a8b0b: 2023-05-27 11:58:44,268 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 11:58:44,269 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188724269"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188724269"}]},"ts":"1685188724269"} 2023-05-27 11:58:44,272 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 11:58:44,273 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 11:58:44,273 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188724273"}]},"ts":"1685188724273"} 2023-05-27 11:58:44,274 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-27 11:58:44,281 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6bbeaf66518da0ed0e27e8cb0b7a8b0b, ASSIGN}] 2023-05-27 11:58:44,283 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6bbeaf66518da0ed0e27e8cb0b7a8b0b, ASSIGN 2023-05-27 11:58:44,284 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=6bbeaf66518da0ed0e27e8cb0b7a8b0b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34733,1685188723680; forceNewPlan=false, retain=false 2023-05-27 11:58:44,435 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6bbeaf66518da0ed0e27e8cb0b7a8b0b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:44,436 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188724435"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188724435"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188724435"}]},"ts":"1685188724435"} 2023-05-27 11:58:44,438 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 6bbeaf66518da0ed0e27e8cb0b7a8b0b, server=jenkins-hbase4.apache.org,34733,1685188723680}] 2023-05-27 11:58:44,594 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 2023-05-27 11:58:44,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6bbeaf66518da0ed0e27e8cb0b7a8b0b, NAME => 'hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:58:44,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 6bbeaf66518da0ed0e27e8cb0b7a8b0b 2023-05-27 11:58:44,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:58:44,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6bbeaf66518da0ed0e27e8cb0b7a8b0b 2023-05-27 11:58:44,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6bbeaf66518da0ed0e27e8cb0b7a8b0b 2023-05-27 11:58:44,595 INFO [StoreOpener-6bbeaf66518da0ed0e27e8cb0b7a8b0b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6bbeaf66518da0ed0e27e8cb0b7a8b0b 2023-05-27 11:58:44,597 DEBUG [StoreOpener-6bbeaf66518da0ed0e27e8cb0b7a8b0b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/namespace/6bbeaf66518da0ed0e27e8cb0b7a8b0b/info 2023-05-27 11:58:44,597 DEBUG [StoreOpener-6bbeaf66518da0ed0e27e8cb0b7a8b0b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/namespace/6bbeaf66518da0ed0e27e8cb0b7a8b0b/info 2023-05-27 11:58:44,597 INFO [StoreOpener-6bbeaf66518da0ed0e27e8cb0b7a8b0b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6bbeaf66518da0ed0e27e8cb0b7a8b0b columnFamilyName info 2023-05-27 11:58:44,598 INFO [StoreOpener-6bbeaf66518da0ed0e27e8cb0b7a8b0b-1] regionserver.HStore(310): Store=6bbeaf66518da0ed0e27e8cb0b7a8b0b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:58:44,599 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/namespace/6bbeaf66518da0ed0e27e8cb0b7a8b0b 2023-05-27 11:58:44,599 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/namespace/6bbeaf66518da0ed0e27e8cb0b7a8b0b 2023-05-27 11:58:44,601 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6bbeaf66518da0ed0e27e8cb0b7a8b0b 2023-05-27 11:58:44,603 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/namespace/6bbeaf66518da0ed0e27e8cb0b7a8b0b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:58:44,604 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6bbeaf66518da0ed0e27e8cb0b7a8b0b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=753870, jitterRate=-0.041405051946640015}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:58:44,604 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6bbeaf66518da0ed0e27e8cb0b7a8b0b: 2023-05-27 11:58:44,606 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b., pid=6, masterSystemTime=1685188724590 2023-05-27 11:58:44,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 2023-05-27 11:58:44,608 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 
2023-05-27 11:58:44,608 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6bbeaf66518da0ed0e27e8cb0b7a8b0b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:44,608 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188724608"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685188724608"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685188724608"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188724608"}]},"ts":"1685188724608"} 2023-05-27 11:58:44,613 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-27 11:58:44,613 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 6bbeaf66518da0ed0e27e8cb0b7a8b0b, server=jenkins-hbase4.apache.org,34733,1685188723680 in 172 msec 2023-05-27 11:58:44,615 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-27 11:58:44,616 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=6bbeaf66518da0ed0e27e8cb0b7a8b0b, ASSIGN in 332 msec 2023-05-27 11:58:44,616 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 11:58:44,617 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188724616"}]},"ts":"1685188724616"} 2023-05-27 11:58:44,618 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-27 11:58:44,621 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 11:58:44,623 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 387 msec 2023-05-27 11:58:44,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-27 11:58:44,637 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:58:44,637 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:58:44,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-27 11:58:44,651 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): 
master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:58:44,654 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-05-27 11:58:44,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-27 11:58:44,670 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:58:44,674 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-05-27 11:58:44,687 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-27 11:58:44,689 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-27 11:58:44,689 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.985sec 2023-05-27 11:58:44,689 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-27 11:58:44,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-27 11:58:44,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-27 11:58:44,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35547,1685188723642-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-27 11:58:44,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35547,1685188723642-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-27 11:58:44,691 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-27 11:58:44,698 DEBUG [Listener at localhost/44023] zookeeper.ReadOnlyZKClient(139): Connect 0x53d8a823 to 127.0.0.1:62748 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:58:44,703 DEBUG [Listener at localhost/44023] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44571d57, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:58:44,704 DEBUG [hconnection-0x65382357-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 11:58:44,706 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50468, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 11:58:44,708 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,35547,1685188723642 2023-05-27 11:58:44,708 INFO [Listener at localhost/44023] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:58:44,711 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-27 11:58:44,711 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:58:44,711 INFO [Listener at localhost/44023] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-27 11:58:44,713 DEBUG [Listener at localhost/44023] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-27 11:58:44,715 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58044, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-27 11:58:44,717 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-27 11:58:44,717 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-27 11:58:44,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 11:58:44,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:58:44,720 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 11:58:44,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-05-27 11:58:44,721 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 11:58:44,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 11:58:44,725 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d 2023-05-27 11:58:44,725 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d empty. 
2023-05-27 11:58:44,726 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d 2023-05-27 11:58:44,726 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-05-27 11:58:44,738 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-05-27 11:58:44,739 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0dd8ba0dafb3ca7b281bc3852f10ca9d, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/.tmp 2023-05-27 11:58:44,746 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:58:44,746 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing 0dd8ba0dafb3ca7b281bc3852f10ca9d, disabling compactions & flushes 2023-05-27 11:58:44,746 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:58:44,746 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:58:44,746 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. after waiting 0 ms 2023-05-27 11:58:44,746 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:58:44,746 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 
2023-05-27 11:58:44,746 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for 0dd8ba0dafb3ca7b281bc3852f10ca9d: 2023-05-27 11:58:44,749 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 11:58:44,750 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685188724749"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188724749"}]},"ts":"1685188724749"} 2023-05-27 11:58:44,751 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 11:58:44,752 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 11:58:44,752 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188724752"}]},"ts":"1685188724752"} 2023-05-27 11:58:44,753 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-05-27 11:58:44,759 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=0dd8ba0dafb3ca7b281bc3852f10ca9d, ASSIGN}] 2023-05-27 11:58:44,760 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=0dd8ba0dafb3ca7b281bc3852f10ca9d, ASSIGN 2023-05-27 11:58:44,761 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=0dd8ba0dafb3ca7b281bc3852f10ca9d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34733,1685188723680; forceNewPlan=false, retain=false 2023-05-27 11:58:44,912 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=0dd8ba0dafb3ca7b281bc3852f10ca9d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:44,912 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685188724912"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188724912"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188724912"}]},"ts":"1685188724912"} 2023-05-27 11:58:44,914 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; 
OpenRegionProcedure 0dd8ba0dafb3ca7b281bc3852f10ca9d, server=jenkins-hbase4.apache.org,34733,1685188723680}] 2023-05-27 11:58:45,069 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:58:45,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0dd8ba0dafb3ca7b281bc3852f10ca9d, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:58:45,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling 0dd8ba0dafb3ca7b281bc3852f10ca9d 2023-05-27 11:58:45,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:58:45,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0dd8ba0dafb3ca7b281bc3852f10ca9d 2023-05-27 11:58:45,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0dd8ba0dafb3ca7b281bc3852f10ca9d 2023-05-27 11:58:45,070 INFO [StoreOpener-0dd8ba0dafb3ca7b281bc3852f10ca9d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0dd8ba0dafb3ca7b281bc3852f10ca9d 2023-05-27 11:58:45,072 DEBUG [StoreOpener-0dd8ba0dafb3ca7b281bc3852f10ca9d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info 2023-05-27 11:58:45,072 DEBUG [StoreOpener-0dd8ba0dafb3ca7b281bc3852f10ca9d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info 2023-05-27 11:58:45,072 INFO [StoreOpener-0dd8ba0dafb3ca7b281bc3852f10ca9d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0dd8ba0dafb3ca7b281bc3852f10ca9d columnFamilyName info 2023-05-27 11:58:45,073 INFO [StoreOpener-0dd8ba0dafb3ca7b281bc3852f10ca9d-1] regionserver.HStore(310): Store=0dd8ba0dafb3ca7b281bc3852f10ca9d/info, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:58:45,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d 2023-05-27 11:58:45,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d 2023-05-27 11:58:45,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0dd8ba0dafb3ca7b281bc3852f10ca9d 2023-05-27 11:58:45,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:58:45,080 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0dd8ba0dafb3ca7b281bc3852f10ca9d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=760811, jitterRate=-0.03257909417152405}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:58:45,080 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0dd8ba0dafb3ca7b281bc3852f10ca9d: 2023-05-27 11:58:45,080 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d., pid=11, masterSystemTime=1685188725065 2023-05-27 11:58:45,082 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:58:45,082 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 
2023-05-27 11:58:45,083 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=0dd8ba0dafb3ca7b281bc3852f10ca9d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:45,083 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685188725083"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685188725083"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685188725083"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188725083"}]},"ts":"1685188725083"} 2023-05-27 11:58:45,087 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-27 11:58:45,087 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 0dd8ba0dafb3ca7b281bc3852f10ca9d, server=jenkins-hbase4.apache.org,34733,1685188723680 in 171 msec 2023-05-27 11:58:45,089 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-27 11:58:45,089 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=0dd8ba0dafb3ca7b281bc3852f10ca9d, ASSIGN in 328 msec 2023-05-27 11:58:45,090 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 11:58:45,090 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188725090"}]},"ts":"1685188725090"} 2023-05-27 11:58:45,092 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-05-27 11:58:45,094 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 11:58:45,096 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 377 msec 2023-05-27 11:58:49,738 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-27 11:58:49,929 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 11:58:54,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 11:58:54,722 INFO [Listener at localhost/44023] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-05-27 11:58:54,725 DEBUG [Listener at localhost/44023] 
hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:58:54,725 DEBUG [Listener at localhost/44023] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:58:54,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-27 11:58:54,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-05-27 11:58:54,744 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-05-27 11:58:54,744 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 11:58:54,745 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-05-27 11:58:54,745 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 2023-05-27 11:58:54,745 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 11:58:54,745 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-27 11:58:54,747 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 11:58:54,747 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,747 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 11:58:54,747 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:58:54,747 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,747 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-27 11:58:54,747 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-05-27 11:58:54,747 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 11:58:54,748 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-27 11:58:54,748 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-27 11:58:54,748 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-05-27 11:58:54,750 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-05-27 11:58:54,750 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-05-27 11:58:54,750 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 11:58:54,751 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-05-27 11:58:54,751 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-27 11:58:54,752 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-27 11:58:54,752 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 2023-05-27 11:58:54,752 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. started... 
2023-05-27 11:58:54,752 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 6bbeaf66518da0ed0e27e8cb0b7a8b0b 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-27 11:58:54,763 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/namespace/6bbeaf66518da0ed0e27e8cb0b7a8b0b/.tmp/info/0daba9f7b31443eea1b5e99288fa2737 2023-05-27 11:58:54,769 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/namespace/6bbeaf66518da0ed0e27e8cb0b7a8b0b/.tmp/info/0daba9f7b31443eea1b5e99288fa2737 as hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/namespace/6bbeaf66518da0ed0e27e8cb0b7a8b0b/info/0daba9f7b31443eea1b5e99288fa2737 2023-05-27 11:58:54,775 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/namespace/6bbeaf66518da0ed0e27e8cb0b7a8b0b/info/0daba9f7b31443eea1b5e99288fa2737, entries=2, sequenceid=6, filesize=4.8 K 2023-05-27 11:58:54,776 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 6bbeaf66518da0ed0e27e8cb0b7a8b0b in 24ms, sequenceid=6, compaction requested=false 2023-05-27 11:58:54,776 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 6bbeaf66518da0ed0e27e8cb0b7a8b0b: 2023-05-27 11:58:54,776 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 2023-05-27 11:58:54,776 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-27 11:58:54,776 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-05-27 11:58:54,776 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,776 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-05-27 11:58:54,776 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,34733,1685188723680' joining acquired barrier for procedure (hbase:namespace) in zk 2023-05-27 11:58:54,778 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,778 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 11:58:54,778 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,778 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:58:54,778 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:58:54,779 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 11:58:54,779 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-27 11:58:54,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:58:54,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:58:54,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-27 11:58:54,780 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,780 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:58:54,780 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,34733,1685188723680' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-05-27 11:58:54,781 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-05-27 11:58:54,781 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@5e8e2f34[Count = 0] remaining members to acquire global barrier 2023-05-27 11:58:54,781 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 11:58:54,783 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 11:58:54,783 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 11:58:54,783 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 11:58:54,783 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 2023-05-27 11:58:54,783 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-05-27 11:58:54,783 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,783 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-27 11:58:54,783 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase4.apache.org,34733,1685188723680' in zk 2023-05-27 11:58:54,785 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,785 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-05-27 11:58:54,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:58:54,785 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-27 11:58:54,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:58:54,785 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 2023-05-27 11:58:54,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:58:54,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:58:54,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-27 11:58:54,787 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,787 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:58:54,787 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-27 11:58:54,787 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,788 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase4.apache.org,34733,1685188723680': 2023-05-27 11:58:54,788 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,34733,1685188723680' released barrier for procedure'hbase:namespace', counting down latch. Waiting for 0 more 2023-05-27 11:58:54,788 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-05-27 11:58:54,788 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-27 11:58:54,788 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-27 11:58:54,788 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-05-27 11:58:54,788 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-27 11:58:54,790 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 11:58:54,790 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 11:58:54,790 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 11:58:54,790 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:58:54,790 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:58:54,790 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 11:58:54,790 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 11:58:54,790 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 11:58:54,790 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:58:54,790 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,790 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 11:58:54,791 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:58:54,791 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-27 11:58:54,791 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 11:58:54,791 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:58:54,791 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----hbase:namespace 2023-05-27 11:58:54,792 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,792 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,792 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:58:54,792 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-27 11:58:54,792 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,798 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 11:58:54,798 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,798 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 11:58:54,798 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 11:58:54,798 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-27 11:58:54,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-05-27 11:58:54,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-27 11:58:54,798 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-27 11:58:54,798 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-27 11:58:54,798 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:58:54,799 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:58:54,799 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 11:58:54,799 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 11:58:54,799 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 11:58:54,799 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 11:58:54,799 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:58:54,801 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry) 2023-05-27 11:58:54,801 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-27 11:59:04,801 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-27 11:59:04,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-27 11:59:04,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-27 11:59:04,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,817 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 11:59:04,817 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 11:59:04,818 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-27 11:59:04,818 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-27 11:59:04,819 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,819 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,821 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,821 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 11:59:04,821 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 11:59:04,821 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:59:04,821 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,821 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-27 11:59:04,821 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,821 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,822 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-27 11:59:04,822 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,822 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,822 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,822 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-27 11:59:04,822 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 11:59:04,823 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-27 11:59:04,823 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-27 11:59:04,823 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-27 11:59:04,823 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:59:04,823 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. started... 
2023-05-27 11:59:04,823 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 0dd8ba0dafb3ca7b281bc3852f10ca9d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-27 11:59:04,835 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/.tmp/info/b9ed4ba8527e44a9aef5b8a3fbaa681b 2023-05-27 11:59:04,842 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/.tmp/info/b9ed4ba8527e44a9aef5b8a3fbaa681b as hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/b9ed4ba8527e44a9aef5b8a3fbaa681b 2023-05-27 11:59:04,848 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/b9ed4ba8527e44a9aef5b8a3fbaa681b, entries=1, sequenceid=5, filesize=5.8 K 2023-05-27 11:59:04,848 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 0dd8ba0dafb3ca7b281bc3852f10ca9d in 25ms, sequenceid=5, compaction requested=false 2023-05-27 11:59:04,849 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 0dd8ba0dafb3ca7b281bc3852f10ca9d: 2023-05-27 11:59:04,849 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:59:04,849 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-27 11:59:04,849 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-05-27 11:59:04,849 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,849 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-27 11:59:04,849 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,34733,1685188723680' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-27 11:59:04,851 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,851 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,851 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,851 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:59:04,851 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:59:04,851 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,851 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-27 11:59:04,851 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:59:04,852 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:59:04,852 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,852 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,852 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:59:04,853 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,34733,1685188723680' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-27 11:59:04,853 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@5ab1a03d[Count = 0] remaining members to acquire 
global barrier 2023-05-27 11:59:04,853 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-27 11:59:04,853 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,854 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,854 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,854 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,854 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-27 11:59:04,854 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-27 11:59:04,854 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,854 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-27 11:59:04,854 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,34733,1685188723680' in zk 2023-05-27 11:59:04,856 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-27 11:59:04,856 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,856 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-27 11:59:04,856 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,856 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:59:04,856 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-27 11:59:04,856 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:59:04,857 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:59:04,857 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:59:04,857 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,857 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,858 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:59:04,858 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,858 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,859 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,34733,1685188723680': 2023-05-27 11:59:04,859 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,34733,1685188723680' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-27 11:59:04,859 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-27 11:59:04,859 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-27 11:59:04,859 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-27 11:59:04,859 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,859 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-27 11:59:04,863 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,863 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,863 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,863 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,863 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,863 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:59:04,863 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:59:04,863 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 11:59:04,864 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 11:59:04,864 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:59:04,864 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,864 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:59:04,864 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,864 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,865 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:59:04,865 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,865 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,865 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,865 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:59:04,866 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,866 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,868 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,868 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,868 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,868 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 11:59:04,868 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:04,868 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 11:59:04,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 11:59:04,868 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:59:04,868 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,868 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-27 11:59:04,868 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 11:59:04,869 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-27 11:59:04,868 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-27 11:59:04,869 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 11:59:04,869 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:59:04,869 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:04,869 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-27 11:59:04,869 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-27 11:59:14,870 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-27 11:59:14,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-27 11:59:14,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-27 11:59:14,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-27 11:59:14,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,880 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 11:59:14,880 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 11:59:14,880 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-27 11:59:14,880 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-27 11:59:14,881 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,881 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,882 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 11:59:14,882 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,882 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 11:59:14,882 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:59:14,882 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,883 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-27 11:59:14,883 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,883 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,883 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-27 11:59:14,883 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,883 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,883 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-27 11:59:14,883 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,883 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-27 11:59:14,884 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 11:59:14,884 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-27 11:59:14,884 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-27 11:59:14,884 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-27 11:59:14,884 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:59:14,884 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. started... 
2023-05-27 11:59:14,884 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 0dd8ba0dafb3ca7b281bc3852f10ca9d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-27 11:59:14,895 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/.tmp/info/76f2afcd767a4aa7ad4314fa33f51cb6 2023-05-27 11:59:14,902 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/.tmp/info/76f2afcd767a4aa7ad4314fa33f51cb6 as hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/76f2afcd767a4aa7ad4314fa33f51cb6 2023-05-27 11:59:14,908 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/76f2afcd767a4aa7ad4314fa33f51cb6, entries=1, sequenceid=9, filesize=5.8 K 2023-05-27 11:59:14,909 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 0dd8ba0dafb3ca7b281bc3852f10ca9d in 25ms, sequenceid=9, compaction requested=false 2023-05-27 11:59:14,909 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 0dd8ba0dafb3ca7b281bc3852f10ca9d: 2023-05-27 11:59:14,909 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:59:14,909 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-27 11:59:14,909 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
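Each flush in this cycle rewrites the single 'info' column family of region 0dd8ba0dafb3ca7b281bc3852f10ca9d (about 1.05 KB of memstore data per pass). For orientation only, a sketch of the kind of write that puts such an edit into that family; the row key, qualifier and value here are placeholders, not the data the test actually writes.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WriteOneEditSketch {
      // Assumes an open Connection, e.g. the one from the earlier flush sketch.
      static void writeOneEdit(Connection conn) throws IOException {
        TableName name = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
        try (Table table = conn.getTable(name)) {
          Put put = new Put(Bytes.toBytes("row-0"));          // placeholder row key
          put.addColumn(Bytes.toBytes("info"),                // family seen in the log
                        Bytes.toBytes("q"),                   // placeholder qualifier
                        Bytes.toBytes("value"));              // placeholder value
          table.put(put);                                     // stays in the memstore until a flush
        }
      }
    }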
2023-05-27 11:59:14,909 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,909 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-27 11:59:14,909 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,34733,1685188723680' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-27 11:59:14,911 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,911 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,911 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,911 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:59:14,911 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:59:14,912 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,912 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-27 11:59:14,912 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:59:14,912 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:59:14,912 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,912 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,913 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:59:14,913 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,34733,1685188723680' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-27 11:59:14,913 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@1fb3250f[Count = 0] remaining members to acquire 
global barrier 2023-05-27 11:59:14,913 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-27 11:59:14,913 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,914 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,914 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,914 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-27 11:59:14,914 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-27 11:59:14,915 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,34733,1685188723680' in zk 2023-05-27 11:59:14,915 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,915 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-27 11:59:14,917 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-27 11:59:14,917 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,917 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-27 11:59:14,917 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,917 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:59:14,917 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:59:14,917 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-27 11:59:14,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:59:14,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:59:14,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:59:14,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,920 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,34733,1685188723680': 2023-05-27 11:59:14,920 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,34733,1685188723680' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-27 11:59:14,920 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-27 11:59:14,920 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
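The ZKProcedureUtil(244) dumps in this cycle are the coordinator printing the barrier layout it keeps under /hbase/flush-table-proc: one child per procedure under acquired/, reached/ and abort/, and one grandchild per participating region server. A small sketch that lists the same layout with the plain ZooKeeper client; the quorum address is the one from this run and is otherwise an assumption.

    import org.apache.zookeeper.ZooKeeper;

    public class DumpFlushProcZnodes {
      public static void main(String[] args) throws Exception {
        // Quorum taken from the log lines above; adjust for another cluster.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:62748", 30000, event -> { });
        try {
          for (String phase : new String[] {"acquired", "reached", "abort"}) {
            String path = "/hbase/flush-table-proc/" + phase;
            if (zk.exists(path, false) == null) {
              continue;                                        // phase znode not created yet
            }
            System.out.println("|-" + phase);
            for (String proc : zk.getChildren(path, false)) {  // procedure name
              System.out.println("|----" + proc);
              for (String member : zk.getChildren(path + "/" + proc, false)) {
                System.out.println("|-------" + member);       // region server that joined the barrier
              }
            }
          }
        } finally {
          zk.close();
        }
      }
    }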
2023-05-27 11:59:14,920 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-27 11:59:14,920 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,920 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-27 11:59:14,921 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,921 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,921 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 11:59:14,921 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,921 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:59:14,921 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:59:14,921 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,921 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,922 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:59:14,922 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 11:59:14,922 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,922 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:59:14,922 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,922 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,923 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:59:14,923 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,923 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,923 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,924 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:59:14,924 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,924 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,927 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,927 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 11:59:14,927 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,927 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 11:59:14,927 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-27 11:59:14,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 11:59:14,927 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 11:59:14,927 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be 
received for this timer. 2023-05-27 11:59:14,927 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-27 11:59:14,927 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:59:14,927 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:14,928 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,928 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,928 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:14,928 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-27 11:59:14,928 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-27 11:59:14,928 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 11:59:14,928 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:59:24,928 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-27 11:59:24,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-27 11:59:24,943 INFO [Listener at localhost/44023] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188724058 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188764931 2023-05-27 11:59:24,943 DEBUG [Listener at localhost/44023] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34901,DS-ec21d720-fc0f-40de-a7f6-2a628341f03e,DISK], DatanodeInfoWithStorage[127.0.0.1:41839,DS-6990e6da-ce8a-4552-886e-26889f827082,DISK]] 2023-05-27 11:59:24,943 DEBUG [Listener at localhost/44023] wal.AbstractFSWAL(716): hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188724058 is not closed yet, will try archiving it next time 2023-05-27 11:59:24,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-27 11:59:24,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-27 11:59:24,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,951 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 11:59:24,951 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 11:59:24,951 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-27 11:59:24,951 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
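The wal.AbstractFSWAL(802) line above is the roll the test requests between flush rounds: the old WAL is closed with 13 entries (about 6.44 KB) and a new writer is opened on the same two-datanode pipeline. A minimal sketch of requesting such a roll through the Admin API, assuming the region server name from this run; whether the test rolls through Admin or through the region server's WAL object directly is not visible here.

    import java.io.IOException;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;

    public class RollWalSketch {
      // Assumes an Admin handle, e.g. the one from the earlier flush sketch.
      static void rollWal(Admin admin) throws IOException {
        // Server name copied from the log above (host,port,startcode).
        admin.rollWALWriter(ServerName.valueOf("jenkins-hbase4.apache.org,34733,1685188723680"));
      }
    }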
2023-05-27 11:59:24,952 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,952 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,953 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:24,953 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 11:59:24,953 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 11:59:24,953 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:59:24,953 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:24,953 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-27 11:59:24,953 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,953 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,954 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-27 11:59:24,954 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,954 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,954 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-27 11:59:24,954 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,954 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-27 11:59:24,954 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 11:59:24,954 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-27 11:59:24,955 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-27 11:59:24,955 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-27 11:59:24,955 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:59:24,955 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. started... 2023-05-27 11:59:24,955 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 0dd8ba0dafb3ca7b281bc3852f10ca9d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-27 11:59:24,966 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/.tmp/info/dda0ca86daff490e9fb3efcad6b1ad1e 2023-05-27 11:59:24,972 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/.tmp/info/dda0ca86daff490e9fb3efcad6b1ad1e as hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/dda0ca86daff490e9fb3efcad6b1ad1e 2023-05-27 11:59:24,977 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/dda0ca86daff490e9fb3efcad6b1ad1e, entries=1, sequenceid=13, filesize=5.8 K 2023-05-27 11:59:24,978 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 0dd8ba0dafb3ca7b281bc3852f10ca9d in 23ms, sequenceid=13, compaction requested=true 2023-05-27 11:59:24,978 DEBUG 
[rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 0dd8ba0dafb3ca7b281bc3852f10ca9d: 2023-05-27 11:59:24,978 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:59:24,978 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-27 11:59:24,979 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-27 11:59:24,979 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:24,979 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-27 11:59:24,979 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,34733,1685188723680' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-27 11:59:24,980 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,981 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:24,981 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:24,981 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:59:24,981 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:59:24,981 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,981 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-27 11:59:24,981 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:59:24,981 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:59:24,982 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,982 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:24,982 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:59:24,982 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,34733,1685188723680' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-27 11:59:24,982 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@13e9617[Count = 0] remaining members to acquire global barrier 2023-05-27 11:59:24,982 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-27 11:59:24,982 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,983 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,983 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,984 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,984 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-05-27 11:59:24,984 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-27 11:59:24,984 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,34733,1685188723680' in zk 2023-05-27 11:59:24,984 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:24,984 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-27 11:59:24,991 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:24,991 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-27 11:59:24,991 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:24,991 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:59:24,991 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-27 11:59:24,991 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:59:24,991 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-05-27 11:59:24,992 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:59:24,992 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:59:24,993 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,993 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:24,993 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:59:24,993 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,994 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:24,994 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,34733,1685188723680': 2023-05-27 11:59:24,994 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,34733,1685188723680' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-27 11:59:24,994 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-27 11:59:24,994 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-27 11:59:24,994 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-27 11:59:24,994 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,994 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-27 11:59:24,996 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,996 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,996 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,996 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:59:24,996 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:59:24,996 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,996 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,996 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 11:59:24,996 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:59:24,996 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:24,996 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 11:59:24,997 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:59:24,997 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,997 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,997 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:59:24,999 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:24,999 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:25,000 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:25,000 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:59:25,000 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:25,001 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:25,003 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:25,003 DEBUG [Listener at 
localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:25,003 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 11:59:25,003 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-27 11:59:25,003 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 11:59:25,003 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:25,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 11:59:25,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-27 11:59:25,004 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:59:25,004 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-27 11:59:25,003 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 11:59:25,004 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:25,004 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:25,004 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. 
(max 20000 ms per retry) 2023-05-27 11:59:25,004 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 11:59:25,004 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:59:25,004 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:25,004 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-27 11:59:25,004 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,004 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-27 11:59:35,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-27 11:59:35,006 DEBUG [Listener at localhost/44023] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 11:59:35,011 DEBUG [Listener at localhost/44023] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 11:59:35,011 DEBUG [Listener at localhost/44023] regionserver.HStore(1912): 0dd8ba0dafb3ca7b281bc3852f10ca9d/info is initiating minor compaction (all files) 2023-05-27 11:59:35,011 INFO [Listener at localhost/44023] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 11:59:35,011 INFO [Listener at localhost/44023] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 11:59:35,011 INFO [Listener at localhost/44023] regionserver.HRegion(2259): Starting compaction of 0dd8ba0dafb3ca7b281bc3852f10ca9d/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 
2023-05-27 11:59:35,012 INFO [Listener at localhost/44023] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/b9ed4ba8527e44a9aef5b8a3fbaa681b, hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/76f2afcd767a4aa7ad4314fa33f51cb6, hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/dda0ca86daff490e9fb3efcad6b1ad1e] into tmpdir=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/.tmp, totalSize=17.4 K 2023-05-27 11:59:35,012 DEBUG [Listener at localhost/44023] compactions.Compactor(207): Compacting b9ed4ba8527e44a9aef5b8a3fbaa681b, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1685188744811 2023-05-27 11:59:35,013 DEBUG [Listener at localhost/44023] compactions.Compactor(207): Compacting 76f2afcd767a4aa7ad4314fa33f51cb6, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1685188754872 2023-05-27 11:59:35,013 DEBUG [Listener at localhost/44023] compactions.Compactor(207): Compacting dda0ca86daff490e9fb3efcad6b1ad1e, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1685188764930 2023-05-27 11:59:35,027 INFO [Listener at localhost/44023] throttle.PressureAwareThroughputController(145): 0dd8ba0dafb3ca7b281bc3852f10ca9d#info#compaction#19 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 11:59:35,050 DEBUG [Listener at localhost/44023] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/.tmp/info/9a135b01b67f46788284b2f3c2aef063 as hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/9a135b01b67f46788284b2f3c2aef063 2023-05-27 11:59:35,057 INFO [Listener at localhost/44023] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 0dd8ba0dafb3ca7b281bc3852f10ca9d/info of 0dd8ba0dafb3ca7b281bc3852f10ca9d into 9a135b01b67f46788284b2f3c2aef063(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
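The two entries above record one minor compaction cycle: three ~5.8 K store files in the info family of region 0dd8ba0dafb3ca7b281bc3852f10ca9d are rewritten into a single 8.0 K file. As a minimal sketch (not taken from the test source), the same flush-then-compact pattern can be driven through the public HBase 2.x client API; the connection setup, row keys and values below are assumptions, and only the table and column-family names come from the log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class FlushAndCompactSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();   // reads hbase-site.xml from the classpath
    TableName tn = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin();
         Table table = conn.getTable(tn)) {
      // Each put + flush produces one small HFile in the 'info' family,
      // mirroring the three ~5.8 K store files selected in the log above.
      for (int i = 0; i < 3; i++) {
        table.put(new Put(Bytes.toBytes("row" + i))
            .addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("v" + i)));
        admin.flush(tn);                                 // memstore -> new store file
      }
      admin.compact(tn);                                 // request a (minor) compaction of the table
    }
  }
}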
2023-05-27 11:59:35,057 DEBUG [Listener at localhost/44023] regionserver.HRegion(2289): Compaction status journal for 0dd8ba0dafb3ca7b281bc3852f10ca9d: 2023-05-27 11:59:35,067 INFO [Listener at localhost/44023] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188764931 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188775059 2023-05-27 11:59:35,068 DEBUG [Listener at localhost/44023] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34901,DS-ec21d720-fc0f-40de-a7f6-2a628341f03e,DISK], DatanodeInfoWithStorage[127.0.0.1:41839,DS-6990e6da-ce8a-4552-886e-26889f827082,DISK]] 2023-05-27 11:59:35,068 DEBUG [Listener at localhost/44023] wal.AbstractFSWAL(716): hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188764931 is not closed yet, will try archiving it next time 2023-05-27 11:59:35,068 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188724058 to hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/oldWALs/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188724058 2023-05-27 11:59:35,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-27 11:59:35,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-27 11:59:35,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,075 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 11:59:35,075 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 11:59:35,075 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-27 11:59:35,075 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
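The coordinator/member exchange that fills most of the surrounding entries is a two-phase barrier kept entirely in ZooKeeper under /hbase/flush-table-proc: the coordinator creates one child per procedure instance under acquired and reached, each member adds its server name beneath that child, and abort is used for error propagation. A small illustrative sketch that dumps those znodes with the plain ZooKeeper client follows; the quorum address is the one printed in the log, while the session timeout and everything else are assumptions.

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

public class FlushProcZnodeDump {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("127.0.0.1:62748", 30000, (WatchedEvent e) -> { });
    try {
      for (String phase : new String[] {"acquired", "reached", "abort"}) {
        String path = "/hbase/flush-table-proc/" + phase;
        List<String> children = zk.getChildren(path, false);
        // One child per running procedure instance; under "acquired" and "reached"
        // each instance in turn has one child per participating region server.
        System.out.println(path + " -> " + children);
      }
    } finally {
      zk.close();
    }
  }
}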
2023-05-27 11:59:35,076 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,076 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,080 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,080 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 11:59:35,080 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 11:59:35,080 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:59:35,080 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,080 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-27 11:59:35,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,081 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-27 11:59:35,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,081 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,081 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-27 11:59:35,081 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,081 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-27 11:59:35,081 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 11:59:35,082 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-27 11:59:35,082 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-27 11:59:35,082 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-27 11:59:35,082 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:59:35,082 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. started... 2023-05-27 11:59:35,082 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 0dd8ba0dafb3ca7b281bc3852f10ca9d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-27 11:59:35,095 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/.tmp/info/87aff6680cb24b65a27e2ca3a3283bb6 2023-05-27 11:59:35,101 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/.tmp/info/87aff6680cb24b65a27e2ca3a3283bb6 as hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/87aff6680cb24b65a27e2ca3a3283bb6 2023-05-27 11:59:35,105 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/87aff6680cb24b65a27e2ca3a3283bb6, entries=1, sequenceid=18, filesize=5.8 K 2023-05-27 11:59:35,106 INFO [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 0dd8ba0dafb3ca7b281bc3852f10ca9d in 24ms, sequenceid=18, compaction requested=false 2023-05-27 11:59:35,107 DEBUG 
[rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 0dd8ba0dafb3ca7b281bc3852f10ca9d: 2023-05-27 11:59:35,107 DEBUG [rs(jenkins-hbase4.apache.org,34733,1685188723680)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:59:35,107 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-27 11:59:35,107 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-27 11:59:35,107 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,107 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-27 11:59:35,107 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,34733,1685188723680' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-27 11:59:35,109 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,109 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,109 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,109 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:59:35,109 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:59:35,109 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,109 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-27 11:59:35,109 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:59:35,109 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:59:35,110 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,110 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,110 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:59:35,111 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,34733,1685188723680' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-27 11:59:35,111 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@2ee6fe8b[Count = 0] remaining members to acquire global barrier 2023-05-27 11:59:35,111 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-27 11:59:35,111 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,112 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,112 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,112 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,112 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-05-27 11:59:35,112 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,112 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-27 11:59:35,112 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-27 11:59:35,112 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,34733,1685188723680' in zk 2023-05-27 11:59:35,115 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-27 11:59:35,115 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,115 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-27 11:59:35,115 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,116 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:59:35,116 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:59:35,115 DEBUG [member: 'jenkins-hbase4.apache.org,34733,1685188723680' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-05-27 11:59:35,116 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:59:35,117 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:59:35,117 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,117 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,117 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:59:35,118 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,118 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,118 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,34733,1685188723680': 2023-05-27 11:59:35,119 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,34733,1685188723680' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-27 11:59:35,119 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-27 11:59:35,119 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-27 11:59:35,119 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-27 11:59:35,119 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,119 INFO [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-27 11:59:35,120 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,120 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,120 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,120 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 11:59:35,121 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 11:59:35,120 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,121 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,120 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 11:59:35,121 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 11:59:35,121 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 11:59:35,121 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:59:35,121 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,121 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,122 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,122 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 11:59:35,122 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,123 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,123 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,123 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 11:59:35,124 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,124 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,127 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,127 DEBUG [Listener at 
localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,127 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 11:59:35,127 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,127 DEBUG [(jenkins-hbase4.apache.org,35547,1685188723642)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-27 11:59:35,127 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:35,127 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,127 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-27 11:59:35,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 11:59:35,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-27 11:59:35,127 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 11:59:35,127 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 11:59:35,127 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,128 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:59:35,128 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-27 11:59:35,128 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 11:59:35,128 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-27 11:59:35,128 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 11:59:35,128 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:59:45,128 DEBUG [Listener at localhost/44023] client.HBaseAdmin(2704): Getting current status of procedure from master... 
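On the client side, HBaseAdmin submits the procedure and then polls the master for completion, as the 'Waiting a max of 300000 ms', 'Sleeping: 10000ms' and 'Checking to see if procedure from request:flush-table-proc is done' entries show. A rough sketch of that wait loop against the Admin API is given below; the timeouts simply mirror the values printed in the log, and the error handling is an assumption.

import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushTableProcWait {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      String signature = "flush-table-proc";
      String instance = "TestLogRolling-testCompactionRecordDoesntBlockRolling";
      Map<String, String> props = new HashMap<>();
      admin.execProcedure(signature, instance, props);         // kick off the coordinated flush
      long deadline = System.currentTimeMillis() + 300_000L;   // "max of 300000 ms" in the log
      while (!admin.isProcedureFinished(signature, instance, props)) {
        if (System.currentTimeMillis() > deadline) {
          throw new IllegalStateException("flush-table-proc did not finish in time");
        }
        Thread.sleep(10_000L);                                 // "Sleeping: 10000ms" in the log
      }
    }
  }
}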
2023-05-27 11:59:45,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35547] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-27 11:59:45,139 INFO [Listener at localhost/44023] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188775059 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188785131 2023-05-27 11:59:45,139 DEBUG [Listener at localhost/44023] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41839,DS-6990e6da-ce8a-4552-886e-26889f827082,DISK], DatanodeInfoWithStorage[127.0.0.1:34901,DS-ec21d720-fc0f-40de-a7f6-2a628341f03e,DISK]] 2023-05-27 11:59:45,139 DEBUG [Listener at localhost/44023] wal.AbstractFSWAL(716): hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188775059 is not closed yet, will try archiving it next time 2023-05-27 11:59:45,139 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-27 11:59:45,140 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188764931 to hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/oldWALs/jenkins-hbase4.apache.org%2C34733%2C1685188723680.1685188764931 2023-05-27 11:59:45,140 INFO [Listener at localhost/44023] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-27 11:59:45,140 DEBUG [Listener at localhost/44023] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x53d8a823 to 127.0.0.1:62748 2023-05-27 11:59:45,141 DEBUG [Listener at localhost/44023] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:59:45,141 DEBUG [Listener at localhost/44023] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-27 11:59:45,141 DEBUG [Listener at localhost/44023] util.JVMClusterUtil(257): Found active master hash=1942335466, stopped=false 2023-05-27 11:59:45,141 INFO [Listener at localhost/44023] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,35547,1685188723642 2023-05-27 11:59:45,143 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 11:59:45,143 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 11:59:45,143 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:59:45,146 INFO [Listener at localhost/44023] procedure2.ProcedureExecutor(629): Stopping 
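The WAL roll recorded above ('Rolled WAL ... with entries=3, filesize=1.97 KB') can also be requested explicitly for a given region server. A minimal sketch using Admin.rollWALWriter follows; the server name is copied from the log purely as a placeholder and would normally be discovered from cluster metadata.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollWalSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org", 34733, 1685188723680L);
      admin.rollWALWriter(rs);   // the old WAL is closed and later archived to oldWALs
    }
  }
}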
2023-05-27 11:59:45,146 DEBUG [Listener at localhost/44023] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1b089c80 to 127.0.0.1:62748 2023-05-27 11:59:45,146 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:59:45,146 INFO [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(1064): Closing user regions 2023-05-27 11:59:45,146 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:59:45,146 INFO [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(3303): Received CLOSE for 0dd8ba0dafb3ca7b281bc3852f10ca9d 2023-05-27 11:59:45,146 DEBUG [Listener at localhost/44023] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:59:45,147 INFO [Listener at localhost/44023] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,34733,1685188723680' ***** 2023-05-27 11:59:45,147 INFO [Listener at localhost/44023] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 11:59:45,147 INFO [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(3303): Received CLOSE for 6bbeaf66518da0ed0e27e8cb0b7a8b0b 2023-05-27 11:59:45,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0dd8ba0dafb3ca7b281bc3852f10ca9d, disabling compactions & flushes 2023-05-27 11:59:45,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:59:45,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:59:45,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. after waiting 0 ms 2023-05-27 11:59:45,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:59:45,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 0dd8ba0dafb3ca7b281bc3852f10ca9d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-27 11:59:45,148 INFO [RS:0;jenkins-hbase4:34733] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 11:59:45,149 INFO [RS:0;jenkins-hbase4:34733] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-27 11:59:45,149 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 11:59:45,149 INFO [RS:0;jenkins-hbase4:34733] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-27 11:59:45,149 INFO [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(3305): Received CLOSE for the region: 6bbeaf66518da0ed0e27e8cb0b7a8b0b, which we are already trying to CLOSE, but not completed yet 2023-05-27 11:59:45,149 INFO [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:45,149 DEBUG [RS:0;jenkins-hbase4:34733] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x36736aed to 127.0.0.1:62748 2023-05-27 11:59:45,149 DEBUG [RS:0;jenkins-hbase4:34733] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:59:45,149 INFO [RS:0;jenkins-hbase4:34733] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-27 11:59:45,150 INFO [RS:0;jenkins-hbase4:34733] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-27 11:59:45,150 INFO [RS:0;jenkins-hbase4:34733] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-27 11:59:45,150 INFO [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 11:59:45,150 INFO [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-27 11:59:45,150 DEBUG [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 0dd8ba0dafb3ca7b281bc3852f10ca9d=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d., 6bbeaf66518da0ed0e27e8cb0b7a8b0b=hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b.} 2023-05-27 11:59:45,151 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 11:59:45,151 DEBUG [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(1504): Waiting on 0dd8ba0dafb3ca7b281bc3852f10ca9d, 1588230740, 6bbeaf66518da0ed0e27e8cb0b7a8b0b 2023-05-27 11:59:45,151 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 11:59:45,151 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 11:59:45,151 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 11:59:45,151 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 11:59:45,151 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-05-27 11:59:45,164 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/.tmp/info/8d53bbacce264ad9bcd443a0bd60c919 2023-05-27 11:59:45,166 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.84 KB at sequenceid=14 (bloomFilter=false), 
to=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/.tmp/info/1eb7a6634c2748e0acb748d50673646f 2023-05-27 11:59:45,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/.tmp/info/8d53bbacce264ad9bcd443a0bd60c919 as hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/8d53bbacce264ad9bcd443a0bd60c919 2023-05-27 11:59:45,176 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/8d53bbacce264ad9bcd443a0bd60c919, entries=1, sequenceid=22, filesize=5.8 K 2023-05-27 11:59:45,230 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 0dd8ba0dafb3ca7b281bc3852f10ca9d in 83ms, sequenceid=22, compaction requested=true 2023-05-27 11:59:45,240 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/b9ed4ba8527e44a9aef5b8a3fbaa681b, hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/76f2afcd767a4aa7ad4314fa33f51cb6, hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/dda0ca86daff490e9fb3efcad6b1ad1e] to archive 2023-05-27 11:59:45,244 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/.tmp/table/4409160eca604a1fab691983cf9ac526 2023-05-27 11:59:45,244 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-27 11:59:45,248 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/b9ed4ba8527e44a9aef5b8a3fbaa681b to hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/b9ed4ba8527e44a9aef5b8a3fbaa681b 2023-05-27 11:59:45,250 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/76f2afcd767a4aa7ad4314fa33f51cb6 to hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/76f2afcd767a4aa7ad4314fa33f51cb6 2023-05-27 11:59:45,252 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/dda0ca86daff490e9fb3efcad6b1ad1e to hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/info/dda0ca86daff490e9fb3efcad6b1ad1e 2023-05-27 11:59:45,259 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/.tmp/info/1eb7a6634c2748e0acb748d50673646f as hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/info/1eb7a6634c2748e0acb748d50673646f 2023-05-27 11:59:45,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/0dd8ba0dafb3ca7b281bc3852f10ca9d/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-05-27 11:59:45,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 2023-05-27 11:59:45,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0dd8ba0dafb3ca7b281bc3852f10ca9d: 2023-05-27 11:59:45,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685188724717.0dd8ba0dafb3ca7b281bc3852f10ca9d. 
2023-05-27 11:59:45,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6bbeaf66518da0ed0e27e8cb0b7a8b0b, disabling compactions & flushes 2023-05-27 11:59:45,264 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 2023-05-27 11:59:45,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 2023-05-27 11:59:45,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. after waiting 0 ms 2023-05-27 11:59:45,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 2023-05-27 11:59:45,267 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/info/1eb7a6634c2748e0acb748d50673646f, entries=20, sequenceid=14, filesize=7.6 K 2023-05-27 11:59:45,268 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/.tmp/table/4409160eca604a1fab691983cf9ac526 as hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/table/4409160eca604a1fab691983cf9ac526 2023-05-27 11:59:45,269 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/namespace/6bbeaf66518da0ed0e27e8cb0b7a8b0b/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-27 11:59:45,270 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 2023-05-27 11:59:45,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6bbeaf66518da0ed0e27e8cb0b7a8b0b: 2023-05-27 11:59:45,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685188724234.6bbeaf66518da0ed0e27e8cb0b7a8b0b. 
2023-05-27 11:59:45,275 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/table/4409160eca604a1fab691983cf9ac526, entries=4, sequenceid=14, filesize=4.9 K 2023-05-27 11:59:45,275 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3174, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 124ms, sequenceid=14, compaction requested=false 2023-05-27 11:59:45,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-27 11:59:45,282 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-27 11:59:45,282 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 11:59:45,282 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 11:59:45,282 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-27 11:59:45,351 INFO [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34733,1685188723680; all regions closed. 2023-05-27 11:59:45,351 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:45,357 DEBUG [RS:0;jenkins-hbase4:34733] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/oldWALs 2023-05-27 11:59:45,357 INFO [RS:0;jenkins-hbase4:34733] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C34733%2C1685188723680.meta:.meta(num 1685188724175) 2023-05-27 11:59:45,357 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/WALs/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:45,362 DEBUG [RS:0;jenkins-hbase4:34733] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/oldWALs 2023-05-27 11:59:45,362 INFO [RS:0;jenkins-hbase4:34733] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C34733%2C1685188723680:(num 1685188785131) 2023-05-27 11:59:45,362 DEBUG [RS:0;jenkins-hbase4:34733] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:59:45,362 INFO [RS:0;jenkins-hbase4:34733] regionserver.LeaseManager(133): Closed leases 2023-05-27 11:59:45,363 INFO [RS:0;jenkins-hbase4:34733] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-27 11:59:45,363 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-27 11:59:45,363 INFO [RS:0;jenkins-hbase4:34733] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34733 2023-05-27 11:59:45,366 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34733,1685188723680 2023-05-27 11:59:45,366 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:59:45,366 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:59:45,367 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34733,1685188723680] 2023-05-27 11:59:45,367 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34733,1685188723680; numProcessing=1 2023-05-27 11:59:45,368 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34733,1685188723680 already deleted, retry=false 2023-05-27 11:59:45,368 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34733,1685188723680 expired; onlineServers=0 2023-05-27 11:59:45,368 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,35547,1685188723642' ***** 2023-05-27 11:59:45,368 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-27 11:59:45,369 DEBUG [M:0;jenkins-hbase4:35547] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@77fa354a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 11:59:45,369 INFO [M:0;jenkins-hbase4:35547] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35547,1685188723642 2023-05-27 11:59:45,369 INFO [M:0;jenkins-hbase4:35547] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35547,1685188723642; all regions closed. 2023-05-27 11:59:45,369 DEBUG [M:0;jenkins-hbase4:35547] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 11:59:45,369 DEBUG [M:0;jenkins-hbase4:35547] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-27 11:59:45,369 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
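The regionserver's ephemeral znode under /hbase/rs disappears, the master's watcher receives NodeDeleted/NodeChildrenChanged, and the server is queued for expiration. The sketch below shows the general ZooKeeper pattern of watching a parent znode and diffing its children to spot departed servers; only the znode path follows the log, the class itself is hypothetical.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Illustrative tracker: re-list /hbase/rs on NodeChildrenChanged and report servers that vanished.
    public class RsTrackerSketch implements Watcher {
        private final ZooKeeper zk;
        private Set<String> known = new HashSet<>();

        public RsTrackerSketch(ZooKeeper zk) { this.zk = zk; }

        public synchronized void refresh() throws Exception {
            List<String> children = zk.getChildren("/hbase/rs", this); // re-arm the watch
            Set<String> current = new HashSet<>(children);
            for (String server : known) {
                if (!current.contains(server)) {
                    System.out.println("ephemeral node deleted, processing expiration [" + server + "]");
                }
            }
            known = current;
        }

        @Override
        public void process(WatchedEvent event) {
            if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
                try { refresh(); } catch (Exception e) { /* ignored in this sketch */ }
            }
        }
    }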
2023-05-27 11:59:45,369 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188723816] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188723816,5,FailOnTimeoutGroup] 2023-05-27 11:59:45,369 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188723817] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188723817,5,FailOnTimeoutGroup] 2023-05-27 11:59:45,369 DEBUG [M:0;jenkins-hbase4:35547] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-27 11:59:45,370 INFO [M:0;jenkins-hbase4:35547] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-27 11:59:45,371 INFO [M:0;jenkins-hbase4:35547] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-27 11:59:45,371 INFO [M:0;jenkins-hbase4:35547] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-27 11:59:45,371 DEBUG [M:0;jenkins-hbase4:35547] master.HMaster(1512): Stopping service threads 2023-05-27 11:59:45,371 INFO [M:0;jenkins-hbase4:35547] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-27 11:59:45,371 ERROR [M:0;jenkins-hbase4:35547] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-27 11:59:45,371 INFO [M:0;jenkins-hbase4:35547] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-27 11:59:45,371 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
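The master cancels its cleaner chores and the OldWALsCleaner worker logs that it was interrupted and exits. A generic sketch of that shutdown pattern with a plain ScheduledExecutorService is given below; it is not the HBase ChoreService, just the same interrupt-and-exit idea.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Illustrative: a periodic cleaner that exits cleanly when its pool is shut down.
    public class CleanerChoreSketch {
        private final ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();

        public void start() {
            // run the cleaning task every 60s, like a ScheduledChore with period=60000 ms
            pool.scheduleAtFixedRate(this::cleanOldWals, 0, 60, TimeUnit.SECONDS);
        }

        private void cleanOldWals() {
            if (Thread.currentThread().isInterrupted()) {
                System.out.println("Interrupted while cleaning old WALs, will try again next round. Exiting.");
                return;
            }
            // ... delete expired files ...
        }

        public void stop() throws InterruptedException {
            pool.shutdownNow();                        // interrupts any in-flight cleaning run
            pool.awaitTermination(30, TimeUnit.SECONDS);
        }
    }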
2023-05-27 11:59:45,372 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-27 11:59:45,372 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:59:45,372 DEBUG [M:0;jenkins-hbase4:35547] zookeeper.ZKUtil(398): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-27 11:59:45,372 WARN [M:0;jenkins-hbase4:35547] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-27 11:59:45,372 INFO [M:0;jenkins-hbase4:35547] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-27 11:59:45,373 INFO [M:0;jenkins-hbase4:35547] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-27 11:59:45,373 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:59:45,373 DEBUG [M:0;jenkins-hbase4:35547] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 11:59:45,373 INFO [M:0;jenkins-hbase4:35547] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:59:45,373 DEBUG [M:0;jenkins-hbase4:35547] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:59:45,373 DEBUG [M:0;jenkins-hbase4:35547] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 11:59:45,373 DEBUG [M:0;jenkins-hbase4:35547] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
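With the /hbase/master znode already deleted, ActiveMasterManager logs "znode data == null". A compact sketch of that read path with the raw ZooKeeper client follows; the helper class name is made up.

    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;

    // Illustrative: fetch the active master address from ZooKeeper, tolerating a missing node.
    public final class MasterAddressReader {
        public static byte[] read(ZooKeeper zk) throws Exception {
            try {
                return zk.getData("/hbase/master", false, null);
            } catch (KeeperException.NoNodeException e) {
                // matches "Unable to get data of znode /hbase/master because node does not exist (not an error)"
                return null; // caller sees "znode data == null" and treats the cluster as having no active master
            }
        }
    }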
2023-05-27 11:59:45,373 INFO [M:0;jenkins-hbase4:35547] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.89 KB heapSize=47.33 KB 2023-05-27 11:59:45,383 INFO [M:0;jenkins-hbase4:35547] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.89 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1bc0c565fe91491a908a37e9a1c4c204 2023-05-27 11:59:45,388 INFO [M:0;jenkins-hbase4:35547] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1bc0c565fe91491a908a37e9a1c4c204 2023-05-27 11:59:45,388 DEBUG [M:0;jenkins-hbase4:35547] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1bc0c565fe91491a908a37e9a1c4c204 as hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1bc0c565fe91491a908a37e9a1c4c204 2023-05-27 11:59:45,394 INFO [M:0;jenkins-hbase4:35547] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1bc0c565fe91491a908a37e9a1c4c204 2023-05-27 11:59:45,394 INFO [M:0;jenkins-hbase4:35547] regionserver.HStore(1080): Added hdfs://localhost:33035/user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1bc0c565fe91491a908a37e9a1c4c204, entries=11, sequenceid=100, filesize=6.1 K 2023-05-27 11:59:45,395 INFO [M:0;jenkins-hbase4:35547] regionserver.HRegion(2948): Finished flush of dataSize ~38.89 KB/39824, heapSize ~47.31 KB/48448, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=100, compaction requested=false 2023-05-27 11:59:45,396 INFO [M:0;jenkins-hbase4:35547] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:59:45,396 DEBUG [M:0;jenkins-hbase4:35547] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:59:45,396 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/829c3078-0a62-6cc2-f662-00d63d2f4a52/MasterData/WALs/jenkins-hbase4.apache.org,35547,1685188723642 2023-05-27 11:59:45,399 INFO [M:0;jenkins-hbase4:35547] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-27 11:59:45,399 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 11:59:45,399 INFO [M:0;jenkins-hbase4:35547] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35547 2023-05-27 11:59:45,401 DEBUG [M:0;jenkins-hbase4:35547] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,35547,1685188723642 already deleted, retry=false 2023-05-27 11:59:45,467 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:59:45,467 INFO [RS:0;jenkins-hbase4:34733] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34733,1685188723680; zookeeper connection closed. 
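The master store flush first writes the new file under .tmp and then commits it into the proc/ family directory by rename, so a partially written file is never visible. A bare-bones sketch of that write-then-rename commit with the Hadoop FileSystem API follows; the directory layout mirrors the log, the rest is illustrative.

    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Illustrative: write a flush file under .tmp and move it into the store directory to commit it.
    public class FlushCommitSketch {
        public static Path commitFlush(FileSystem fs, Path regionDir, String family,
                                       String fileName, byte[] payload) throws Exception {
            Path tmpFile = new Path(new Path(regionDir, ".tmp/" + family), fileName);
            try (FSDataOutputStream out = fs.create(tmpFile, true)) {
                out.write(payload); // stands in for the real store file writer
            }
            Path committed = new Path(new Path(regionDir, family), fileName);
            fs.mkdirs(committed.getParent());
            if (!fs.rename(tmpFile, committed)) { // "Committing .tmp/proc/... as .../proc/..."
                throw new java.io.IOException("Failed to commit " + tmpFile + " as " + committed);
            }
            return committed;
        }
    }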
2023-05-27 11:59:45,467 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): regionserver:34733-0x1006c81c6ad0001, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:59:45,468 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4ad57418] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4ad57418 2023-05-27 11:59:45,468 INFO [Listener at localhost/44023] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-27 11:59:45,567 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:59:45,567 INFO [M:0;jenkins-hbase4:35547] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35547,1685188723642; zookeeper connection closed. 2023-05-27 11:59:45,567 DEBUG [Listener at localhost/44023-EventThread] zookeeper.ZKWatcher(600): master:35547-0x1006c81c6ad0000, quorum=127.0.0.1:62748, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 11:59:45,568 WARN [Listener at localhost/44023] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:59:45,572 INFO [Listener at localhost/44023] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:59:45,614 WARN [BP-899670933-172.31.14.131-1685188723097 heartbeating to localhost/127.0.0.1:33035] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-899670933-172.31.14.131-1685188723097 (Datanode Uuid 53b14727-60e4-4d70-8ccd-bb2e5d580c8f) service to localhost/127.0.0.1:33035 2023-05-27 11:59:45,615 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/cluster_c86b3b99-27ae-57ae-d8bb-0128a378b958/dfs/data/data3/current/BP-899670933-172.31.14.131-1685188723097] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:59:45,615 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/cluster_c86b3b99-27ae-57ae-d8bb-0128a378b958/dfs/data/data4/current/BP-899670933-172.31.14.131-1685188723097] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:59:45,677 WARN [Listener at localhost/44023] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 11:59:45,681 INFO [Listener at localhost/44023] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:59:45,785 WARN [BP-899670933-172.31.14.131-1685188723097 heartbeating to localhost/127.0.0.1:33035] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 11:59:45,785 WARN [BP-899670933-172.31.14.131-1685188723097 heartbeating to localhost/127.0.0.1:33035] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-899670933-172.31.14.131-1685188723097 (Datanode Uuid 58c91519-83be-4fc7-b0b2-b251b5035b91) service to localhost/127.0.0.1:33035 2023-05-27 11:59:45,785 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/cluster_c86b3b99-27ae-57ae-d8bb-0128a378b958/dfs/data/data1/current/BP-899670933-172.31.14.131-1685188723097] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:59:45,786 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/cluster_c86b3b99-27ae-57ae-d8bb-0128a378b958/dfs/data/data2/current/BP-899670933-172.31.14.131-1685188723097] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 11:59:45,797 INFO [Listener at localhost/44023] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 11:59:45,909 INFO [Listener at localhost/44023] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-27 11:59:45,926 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-27 11:59:45,935 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-27 11:59:45,936 INFO [Listener at localhost/44023] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=96 (was 88) - Thread LEAK? -, OpenFileDescriptor=500 (was 460) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=46 (was 40) - SystemLoadAverage LEAK? -, ProcessCount=169 (was 169), AvailableMemoryMB=3469 (was 3648) 2023-05-27 11:59:45,944 INFO [Listener at localhost/44023] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=96, OpenFileDescriptor=500, MaxFileDescriptor=60000, SystemLoadAverage=46, ProcessCount=169, AvailableMemoryMB=3469 2023-05-27 11:59:45,944 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-27 11:59:45,944 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/hadoop.log.dir so I do NOT create it in target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5 2023-05-27 11:59:45,944 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/71e4b92c-fd9b-4970-004b-f7201d0f21b7/hadoop.tmp.dir so I do NOT create it in target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5 2023-05-27 11:59:45,944 INFO [Listener at localhost/44023] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/cluster_76099d46-217a-c2a6-6b7a-500668283d27, deleteOnExit=true 2023-05-27 11:59:45,944 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-27 11:59:45,944 INFO [Listener at localhost/44023] 
hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/test.cache.data in system properties and HBase conf 2023-05-27 11:59:45,944 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/hadoop.tmp.dir in system properties and HBase conf 2023-05-27 11:59:45,945 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/hadoop.log.dir in system properties and HBase conf 2023-05-27 11:59:45,945 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-27 11:59:45,945 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-27 11:59:45,945 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-27 11:59:45,945 DEBUG [Listener at localhost/44023] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-27 11:59:45,945 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-27 11:59:45,945 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-27 11:59:45,945 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-27 11:59:45,945 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 11:59:45,946 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-27 11:59:45,946 INFO 
[Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-27 11:59:45,946 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 11:59:45,946 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 11:59:45,946 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-27 11:59:45,946 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/nfs.dump.dir in system properties and HBase conf 2023-05-27 11:59:45,946 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/java.io.tmpdir in system properties and HBase conf 2023-05-27 11:59:45,946 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 11:59:45,946 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-27 11:59:45,946 INFO [Listener at localhost/44023] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-27 11:59:45,948 WARN [Listener at localhost/44023] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
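From here the test brings up a fresh minicluster with the same StartMiniClusterOption shape logged above (1 master, 1 regionserver, 2 datanodes, 1 ZK server). The sketch below shows how a JUnit-style test typically drives this through HBaseTestingUtility; the exact builder methods are recalled from the HBase 2.x test API rather than taken verbatim from this log.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    // Illustrative test-harness setup mirroring the options logged above.
    public class MiniClusterSketch {
        private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

        public static void main(String[] args) throws Exception {
            StartMiniClusterOption option = StartMiniClusterOption.builder()
                .numMasters(1)
                .numRegionServers(1)
                .numDataNodes(2)
                .numZkServers(1)
                .build();
            TEST_UTIL.startMiniCluster(option);   // "STARTING DFS", then ZK, master and regionserver
            try {
                // ... run assertions against the cluster ...
            } finally {
                TEST_UTIL.shutdownMiniCluster();  // "Minicluster is down"
            }
        }
    }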
2023-05-27 11:59:45,951 WARN [Listener at localhost/44023] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 11:59:45,951 WARN [Listener at localhost/44023] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 11:59:45,991 WARN [Listener at localhost/44023] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:59:45,993 INFO [Listener at localhost/44023] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:59:45,997 INFO [Listener at localhost/44023] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/java.io.tmpdir/Jetty_localhost_43713_hdfs____y4mdes/webapp 2023-05-27 11:59:46,087 INFO [Listener at localhost/44023] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43713 2023-05-27 11:59:46,089 WARN [Listener at localhost/44023] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-27 11:59:46,092 WARN [Listener at localhost/44023] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 11:59:46,092 WARN [Listener at localhost/44023] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 11:59:46,127 WARN [Listener at localhost/40147] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:59:46,136 WARN [Listener at localhost/40147] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:59:46,138 WARN [Listener at localhost/40147] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:59:46,139 INFO [Listener at localhost/40147] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:59:46,145 INFO [Listener at localhost/40147] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/java.io.tmpdir/Jetty_localhost_39175_datanode____.10y0og/webapp 2023-05-27 11:59:46,236 INFO [Listener at localhost/40147] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39175 2023-05-27 11:59:46,241 WARN [Listener at localhost/46625] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:59:46,253 WARN [Listener at localhost/46625] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 11:59:46,255 WARN [Listener at localhost/46625] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 11:59:46,256 INFO [Listener at localhost/46625] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 11:59:46,261 INFO [Listener at localhost/46625] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/java.io.tmpdir/Jetty_localhost_44797_datanode____.qcs3no/webapp 2023-05-27 11:59:46,336 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf96adac8781d30c9: Processing first storage report for DS-e419e145-668a-46a0-b31d-482de4c73ae4 from datanode b78a79ee-7470-4cc5-94fe-d9719490e414 2023-05-27 11:59:46,336 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf96adac8781d30c9: from storage DS-e419e145-668a-46a0-b31d-482de4c73ae4 node DatanodeRegistration(127.0.0.1:45727, datanodeUuid=b78a79ee-7470-4cc5-94fe-d9719490e414, infoPort=38435, infoSecurePort=0, ipcPort=46625, storageInfo=lv=-57;cid=testClusterID;nsid=2108253493;c=1685188785953), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:59:46,336 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf96adac8781d30c9: Processing first storage report for DS-8b9fd699-dcd0-444d-a649-4f89816b67f1 from datanode b78a79ee-7470-4cc5-94fe-d9719490e414 2023-05-27 11:59:46,336 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf96adac8781d30c9: from storage DS-8b9fd699-dcd0-444d-a649-4f89816b67f1 node DatanodeRegistration(127.0.0.1:45727, datanodeUuid=b78a79ee-7470-4cc5-94fe-d9719490e414, infoPort=38435, infoSecurePort=0, ipcPort=46625, storageInfo=lv=-57;cid=testClusterID;nsid=2108253493;c=1685188785953), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:59:46,355 INFO [Listener at localhost/46625] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44797 2023-05-27 11:59:46,361 WARN [Listener at localhost/41535] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 11:59:46,452 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xabd33d82cea9d5a: Processing first storage report for DS-868ac10b-0f0e-4a6e-8a38-8aaeab3d890d from datanode 626af19d-6512-4704-8725-f56f5210ff30 2023-05-27 11:59:46,452 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xabd33d82cea9d5a: from storage DS-868ac10b-0f0e-4a6e-8a38-8aaeab3d890d node DatanodeRegistration(127.0.0.1:39927, datanodeUuid=626af19d-6512-4704-8725-f56f5210ff30, infoPort=41105, infoSecurePort=0, ipcPort=41535, storageInfo=lv=-57;cid=testClusterID;nsid=2108253493;c=1685188785953), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:59:46,452 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xabd33d82cea9d5a: Processing first storage report for DS-b34dd5f5-edab-4b90-b4f1-48698b687411 from datanode 626af19d-6512-4704-8725-f56f5210ff30 2023-05-27 11:59:46,452 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xabd33d82cea9d5a: from storage DS-b34dd5f5-edab-4b90-b4f1-48698b687411 node DatanodeRegistration(127.0.0.1:39927, datanodeUuid=626af19d-6512-4704-8725-f56f5210ff30, infoPort=41105, infoSecurePort=0, ipcPort=41535, storageInfo=lv=-57;cid=testClusterID;nsid=2108253493;c=1685188785953), blocks: 
0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 11:59:46,469 DEBUG [Listener at localhost/41535] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5 2023-05-27 11:59:46,471 INFO [Listener at localhost/41535] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/cluster_76099d46-217a-c2a6-6b7a-500668283d27/zookeeper_0, clientPort=62142, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/cluster_76099d46-217a-c2a6-6b7a-500668283d27/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/cluster_76099d46-217a-c2a6-6b7a-500668283d27/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-27 11:59:46,472 INFO [Listener at localhost/41535] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62142 2023-05-27 11:59:46,472 INFO [Listener at localhost/41535] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:59:46,473 INFO [Listener at localhost/41535] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:59:46,485 INFO [Listener at localhost/41535] util.FSUtils(471): Created version file at hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62 with version=8 2023-05-27 11:59:46,486 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/hbase-staging 2023-05-27 11:59:46,487 INFO [Listener at localhost/41535] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 11:59:46,487 INFO [Listener at localhost/41535] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:59:46,487 INFO [Listener at localhost/41535] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 11:59:46,488 INFO [Listener at localhost/41535] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 11:59:46,488 INFO [Listener at localhost/41535] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:59:46,488 INFO [Listener at localhost/41535] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 11:59:46,488 
INFO [Listener at localhost/41535] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 11:59:46,489 INFO [Listener at localhost/41535] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46025 2023-05-27 11:59:46,489 INFO [Listener at localhost/41535] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:59:46,490 INFO [Listener at localhost/41535] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:59:46,491 INFO [Listener at localhost/41535] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46025 connecting to ZooKeeper ensemble=127.0.0.1:62142 2023-05-27 11:59:46,498 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:460250x0, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 11:59:46,498 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46025-0x1006c82bc290000 connected 2023-05-27 11:59:46,519 DEBUG [Listener at localhost/41535] zookeeper.ZKUtil(164): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:59:46,519 DEBUG [Listener at localhost/41535] zookeeper.ZKUtil(164): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:59:46,520 DEBUG [Listener at localhost/41535] zookeeper.ZKUtil(164): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 11:59:46,521 DEBUG [Listener at localhost/41535] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46025 2023-05-27 11:59:46,521 DEBUG [Listener at localhost/41535] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46025 2023-05-27 11:59:46,521 DEBUG [Listener at localhost/41535] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46025 2023-05-27 11:59:46,522 DEBUG [Listener at localhost/41535] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46025 2023-05-27 11:59:46,523 DEBUG [Listener at localhost/41535] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46025 2023-05-27 11:59:46,523 INFO [Listener at localhost/41535] master.HMaster(444): hbase.rootdir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62, hbase.cluster.distributed=false 2023-05-27 11:59:46,537 INFO [Listener at localhost/41535] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 11:59:46,537 INFO [Listener at localhost/41535] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:59:46,537 INFO [Listener at 
localhost/41535] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 11:59:46,537 INFO [Listener at localhost/41535] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 11:59:46,537 INFO [Listener at localhost/41535] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 11:59:46,537 INFO [Listener at localhost/41535] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 11:59:46,537 INFO [Listener at localhost/41535] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 11:59:46,539 INFO [Listener at localhost/41535] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32953 2023-05-27 11:59:46,540 INFO [Listener at localhost/41535] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-27 11:59:46,542 DEBUG [Listener at localhost/41535] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-27 11:59:46,542 INFO [Listener at localhost/41535] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:59:46,543 INFO [Listener at localhost/41535] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:59:46,544 INFO [Listener at localhost/41535] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32953 connecting to ZooKeeper ensemble=127.0.0.1:62142 2023-05-27 11:59:46,548 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): regionserver:329530x0, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 11:59:46,549 DEBUG [Listener at localhost/41535] zookeeper.ZKUtil(164): regionserver:329530x0, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 11:59:46,549 DEBUG [Listener at localhost/41535] zookeeper.ZKUtil(164): regionserver:329530x0, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 11:59:46,550 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32953-0x1006c82bc290001 connected 2023-05-27 11:59:46,550 DEBUG [Listener at localhost/41535] zookeeper.ZKUtil(164): regionserver:32953-0x1006c82bc290001, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 11:59:46,553 DEBUG [Listener at localhost/41535] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32953 2023-05-27 11:59:46,553 DEBUG [Listener at localhost/41535] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32953 2023-05-27 11:59:46,554 DEBUG [Listener at localhost/41535] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32953 2023-05-27 11:59:46,554 DEBUG [Listener at localhost/41535] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32953 2023-05-27 11:59:46,554 DEBUG [Listener at localhost/41535] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32953 2023-05-27 11:59:46,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,46025,1685188786487 2023-05-27 11:59:46,559 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 11:59:46,559 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,46025,1685188786487 2023-05-27 11:59:46,560 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): regionserver:32953-0x1006c82bc290001, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 11:59:46,560 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 11:59:46,560 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:59:46,561 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 11:59:46,562 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,46025,1685188786487 from backup master directory 2023-05-27 11:59:46,562 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 11:59:46,563 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,46025,1685188786487 2023-05-27 11:59:46,563 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 11:59:46,563 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
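The new master first adds a backup-masters znode, then claims /hbase/master and deletes its backup entry once it wins. Below is a stripped-down ZooKeeper sketch of that claim: create an ephemeral znode and fall back to watching it if another master already holds it. Only the path follows the log; the class is hypothetical.

    import java.nio.charset.StandardCharsets;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Illustrative: attempt to become active master by creating an ephemeral /hbase/master znode.
    public final class ActiveMasterClaimSketch {
        public static boolean tryBecomeActive(ZooKeeper zk, String serverName) throws Exception {
            try {
                zk.create("/hbase/master", serverName.getBytes(StandardCharsets.UTF_8),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                // "Registered as active master=<serverName>"
                return true;
            } catch (KeeperException.NodeExistsException e) {
                // another master is active; keep a watch and stay a backup master
                zk.exists("/hbase/master", true);
                return false;
            }
        }
    }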
2023-05-27 11:59:46,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,46025,1685188786487 2023-05-27 11:59:46,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/hbase.id with ID: 1e806623-3266-4cde-bda5-8381ba4227a6 2023-05-27 11:59:46,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:59:46,586 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:59:46,592 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x46199476 to 127.0.0.1:62142 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:59:46,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15a2d603, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:59:46,597 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 11:59:46,597 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-27 11:59:46,598 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:59:46,599 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/data/master/store-tmp 2023-05-27 11:59:46,605 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:59:46,605 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 11:59:46,605 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:59:46,605 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:59:46,605 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 11:59:46,606 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:59:46,606 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 11:59:46,606 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:59:46,606 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/WALs/jenkins-hbase4.apache.org,46025,1685188786487 2023-05-27 11:59:46,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46025%2C1685188786487, suffix=, logDir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/WALs/jenkins-hbase4.apache.org,46025,1685188786487, archiveDir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/oldWALs, maxLogs=10 2023-05-27 11:59:46,614 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/WALs/jenkins-hbase4.apache.org,46025,1685188786487/jenkins-hbase4.apache.org%2C46025%2C1685188786487.1685188786609 2023-05-27 11:59:46,614 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45727,DS-e419e145-668a-46a0-b31d-482de4c73ae4,DISK], DatanodeInfoWithStorage[127.0.0.1:39927,DS-868ac10b-0f0e-4a6e-8a38-8aaeab3d890d,DISK]] 2023-05-27 11:59:46,614 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:59:46,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:59:46,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:59:46,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:59:46,616 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:59:46,617 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-27 11:59:46,618 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-27 11:59:46,618 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:59:46,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:59:46,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:59:46,621 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 11:59:46,624 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:59:46,624 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=878391, jitterRate=0.11693297326564789}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:59:46,624 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 11:59:46,624 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-27 11:59:46,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-27 11:59:46,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
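The store opener logs its CompactionConfiguration (minFilesToCompact=3, ratio 1.2, off-peak ratio 5.0). Below is a toy version of the ratio test that size-based selection policies apply: a file stays in the candidate set only while it is not much larger than the combined size of the smaller files that would compact with it. This is a simplification for illustration, not ExploringCompactionPolicy itself.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Toy ratio-based selection: keep a file only if size <= ratio * (sum of already-selected smaller files).
    public class RatioSelectionSketch {
        public static List<Long> select(long[] fileSizes, double ratio, int minFilesToCompact) {
            long[] sizes = fileSizes.clone();
            Arrays.sort(sizes);                         // ascending: smallest candidates first
            long sumOfSmaller = 0;
            List<Long> selected = new ArrayList<>();
            for (long size : sizes) {
                if (selected.size() < minFilesToCompact || size <= ratio * sumOfSmaller) {
                    selected.add(size);
                    sumOfSmaller += size;
                } else {
                    break;                              // this file is too big relative to the rest
                }
            }
            return selected.size() >= minFilesToCompact ? selected : List.of();
        }

        public static void main(String[] args) {
            // treating sizes as MB and ratio 1.2 (as logged): 100 > 1.2 * (8 + 10 + 12), so it is excluded
            System.out.println(select(new long[] {8, 10, 12, 100}, 1.2, 3)); // prints [8, 10, 12]
        }
    }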
2023-05-27 11:59:46,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-27 11:59:46,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-27 11:59:46,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-27 11:59:46,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-27 11:59:46,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-27 11:59:46,631 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-27 11:59:46,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-27 11:59:46,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
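StochasticLoadBalancer reports maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000 and a list of weighted cost functions. The sketch below shows that style of search in miniature: propose random region moves, score the cluster as a weighted sum of costs, keep only improving moves, and stop on a step count or time budget. Purely illustrative, not the HBase balancer.

    import java.util.List;
    import java.util.Random;
    import java.util.function.ToDoubleFunction;

    // Illustrative weighted-cost hill-climbing loop in the spirit of a stochastic balancer.
    public class StochasticBalanceSketch {
        // a cost function scores a cluster plan (region -> server index); its multiplier is its weight
        record WeightedCost(double multiplier, ToDoubleFunction<int[]> cost) {}

        static double totalCost(int[] plan, List<WeightedCost> costs) {
            double sum = 0;
            for (WeightedCost c : costs) {
                sum += c.multiplier() * c.cost().applyAsDouble(plan); // "sum of multiplier of cost functions"
            }
            return sum;
        }

        static int[] balance(int[] plan, int servers, List<WeightedCost> costs,
                             long maxSteps, long maxRunningTimeMs) {
            Random rnd = new Random();
            long deadline = System.currentTimeMillis() + maxRunningTimeMs;
            double best = totalCost(plan, costs);
            for (long step = 0; step < maxSteps && System.currentTimeMillis() < deadline; step++) {
                int region = rnd.nextInt(plan.length);
                int oldServer = plan[region];
                plan[region] = rnd.nextInt(servers);      // propose moving one region
                double cost = totalCost(plan, costs);
                if (cost < best) {
                    best = cost;                          // keep the improving move
                } else {
                    plan[region] = oldServer;             // revert a move that did not help
                }
            }
            return plan;
        }
    }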
2023-05-27 11:59:46,642 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-27 11:59:46,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-27 11:59:46,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-27 11:59:46,644 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:59:46,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-27 11:59:46,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-27 11:59:46,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-27 11:59:46,647 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): regionserver:32953-0x1006c82bc290001, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 11:59:46,647 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 11:59:46,647 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:59:46,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,46025,1685188786487, sessionid=0x1006c82bc290000, setting cluster-up flag (Was=false) 2023-05-27 11:59:46,651 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:59:46,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-27 11:59:46,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46025,1685188786487 2023-05-27 11:59:46,658 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 
11:59:46,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-27 11:59:46,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46025,1685188786487 2023-05-27 11:59:46,663 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/.hbase-snapshot/.tmp 2023-05-27 11:59:46,665 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-27 11:59:46,665 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:59:46,665 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:59:46,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:59:46,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 11:59:46,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-27 11:59:46,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:59:46,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 11:59:46,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:59:46,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685188816667 2023-05-27 11:59:46,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-27 11:59:46,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-27 11:59:46,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-27 11:59:46,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-27 11:59:46,667 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-27 11:59:46,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-27 11:59:46,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 11:59:46,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-27 11:59:46,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-27 11:59:46,668 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 11:59:46,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-27 11:59:46,668 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-27 11:59:46,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-27 11:59:46,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-27 11:59:46,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188786668,5,FailOnTimeoutGroup] 2023-05-27 11:59:46,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188786668,5,FailOnTimeoutGroup] 2023-05-27 11:59:46,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 11:59:46,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-27 11:59:46,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-27 11:59:46,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-27 11:59:46,669 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 11:59:46,679 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 11:59:46,679 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 11:59:46,679 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62 2023-05-27 11:59:46,687 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:59:46,688 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 11:59:46,689 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/info 2023-05-27 11:59:46,689 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 11:59:46,690 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:59:46,690 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 11:59:46,691 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:59:46,691 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 11:59:46,692 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:59:46,692 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 11:59:46,693 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/table 2023-05-27 11:59:46,694 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 11:59:46,694 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:59:46,695 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740 2023-05-27 11:59:46,695 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740 2023-05-27 11:59:46,697 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 11:59:46,698 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 11:59:46,700 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:59:46,700 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=871963, jitterRate=0.10875949263572693}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 11:59:46,700 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 11:59:46,700 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 11:59:46,700 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 11:59:46,700 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 11:59:46,700 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 11:59:46,701 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 11:59:46,706 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 11:59:46,706 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 11:59:46,707 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 11:59:46,707 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-27 11:59:46,707 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-27 11:59:46,709 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-27 11:59:46,710 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-27 11:59:46,756 INFO [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(951): ClusterId : 1e806623-3266-4cde-bda5-8381ba4227a6 2023-05-27 11:59:46,757 DEBUG [RS:0;jenkins-hbase4:32953] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-27 11:59:46,759 DEBUG [RS:0;jenkins-hbase4:32953] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-27 11:59:46,759 DEBUG [RS:0;jenkins-hbase4:32953] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-27 11:59:46,760 DEBUG [RS:0;jenkins-hbase4:32953] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-27 11:59:46,761 DEBUG [RS:0;jenkins-hbase4:32953] zookeeper.ReadOnlyZKClient(139): Connect 0x781c3e85 to 127.0.0.1:62142 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:59:46,764 DEBUG [RS:0;jenkins-hbase4:32953] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@76dbcff6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:59:46,764 DEBUG [RS:0;jenkins-hbase4:32953] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@72e455f5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 11:59:46,773 DEBUG [RS:0;jenkins-hbase4:32953] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:32953 2023-05-27 11:59:46,773 INFO [RS:0;jenkins-hbase4:32953] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-27 11:59:46,773 INFO [RS:0;jenkins-hbase4:32953] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-27 11:59:46,773 DEBUG [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-27 11:59:46,774 INFO [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,46025,1685188786487 with isa=jenkins-hbase4.apache.org/172.31.14.131:32953, startcode=1685188786537 2023-05-27 11:59:46,774 DEBUG [RS:0;jenkins-hbase4:32953] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-27 11:59:46,776 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47091, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 11:59:46,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46025] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 11:59:46,778 DEBUG [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62 2023-05-27 11:59:46,778 DEBUG [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40147 2023-05-27 11:59:46,778 DEBUG [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 11:59:46,784 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 11:59:46,784 DEBUG [RS:0;jenkins-hbase4:32953] zookeeper.ZKUtil(162): regionserver:32953-0x1006c82bc290001, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 11:59:46,784 WARN [RS:0;jenkins-hbase4:32953] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-27 11:59:46,784 INFO [RS:0;jenkins-hbase4:32953] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:59:46,784 DEBUG [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(1946): logDir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 11:59:46,784 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,32953,1685188786537] 2023-05-27 11:59:46,788 DEBUG [RS:0;jenkins-hbase4:32953] zookeeper.ZKUtil(162): regionserver:32953-0x1006c82bc290001, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 11:59:46,789 DEBUG [RS:0;jenkins-hbase4:32953] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 11:59:46,789 INFO [RS:0;jenkins-hbase4:32953] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 11:59:46,791 INFO [RS:0;jenkins-hbase4:32953] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 11:59:46,791 INFO [RS:0;jenkins-hbase4:32953] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 11:59:46,791 INFO [RS:0;jenkins-hbase4:32953] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 11:59:46,792 INFO [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 11:59:46,793 INFO [RS:0;jenkins-hbase4:32953] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-27 11:59:46,793 DEBUG [RS:0;jenkins-hbase4:32953] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:59:46,793 DEBUG [RS:0;jenkins-hbase4:32953] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:59:46,793 DEBUG [RS:0;jenkins-hbase4:32953] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:59:46,793 DEBUG [RS:0;jenkins-hbase4:32953] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:59:46,793 DEBUG [RS:0;jenkins-hbase4:32953] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:59:46,794 DEBUG [RS:0;jenkins-hbase4:32953] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 11:59:46,794 DEBUG [RS:0;jenkins-hbase4:32953] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:59:46,794 DEBUG [RS:0;jenkins-hbase4:32953] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:59:46,794 DEBUG [RS:0;jenkins-hbase4:32953] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:59:46,794 DEBUG [RS:0;jenkins-hbase4:32953] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 11:59:46,796 INFO [RS:0;jenkins-hbase4:32953] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 11:59:46,796 INFO [RS:0;jenkins-hbase4:32953] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 11:59:46,796 INFO [RS:0;jenkins-hbase4:32953] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 11:59:46,806 INFO [RS:0;jenkins-hbase4:32953] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 11:59:46,806 INFO [RS:0;jenkins-hbase4:32953] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32953,1685188786537-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-27 11:59:46,816 INFO [RS:0;jenkins-hbase4:32953] regionserver.Replication(203): jenkins-hbase4.apache.org,32953,1685188786537 started 2023-05-27 11:59:46,816 INFO [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,32953,1685188786537, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:32953, sessionid=0x1006c82bc290001 2023-05-27 11:59:46,816 DEBUG [RS:0;jenkins-hbase4:32953] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 11:59:46,816 DEBUG [RS:0;jenkins-hbase4:32953] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 11:59:46,817 DEBUG [RS:0;jenkins-hbase4:32953] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32953,1685188786537' 2023-05-27 11:59:46,817 DEBUG [RS:0;jenkins-hbase4:32953] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 11:59:46,817 DEBUG [RS:0;jenkins-hbase4:32953] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 11:59:46,817 DEBUG [RS:0;jenkins-hbase4:32953] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 11:59:46,817 DEBUG [RS:0;jenkins-hbase4:32953] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 11:59:46,817 DEBUG [RS:0;jenkins-hbase4:32953] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 11:59:46,817 DEBUG [RS:0;jenkins-hbase4:32953] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32953,1685188786537' 2023-05-27 11:59:46,817 DEBUG [RS:0;jenkins-hbase4:32953] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 11:59:46,817 DEBUG [RS:0;jenkins-hbase4:32953] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 11:59:46,818 DEBUG [RS:0;jenkins-hbase4:32953] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 11:59:46,818 INFO [RS:0;jenkins-hbase4:32953] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 11:59:46,818 INFO [RS:0;jenkins-hbase4:32953] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-27 11:59:46,860 DEBUG [jenkins-hbase4:46025] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-27 11:59:46,861 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,32953,1685188786537, state=OPENING 2023-05-27 11:59:46,862 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-27 11:59:46,863 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:59:46,863 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,32953,1685188786537}] 2023-05-27 11:59:46,863 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 11:59:46,919 INFO [RS:0;jenkins-hbase4:32953] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32953%2C1685188786537, suffix=, logDir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537, archiveDir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/oldWALs, maxLogs=32 2023-05-27 11:59:46,927 INFO [RS:0;jenkins-hbase4:32953] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537/jenkins-hbase4.apache.org%2C32953%2C1685188786537.1685188786920 2023-05-27 11:59:46,927 DEBUG [RS:0;jenkins-hbase4:32953] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39927,DS-868ac10b-0f0e-4a6e-8a38-8aaeab3d890d,DISK], DatanodeInfoWithStorage[127.0.0.1:45727,DS-e419e145-668a-46a0-b31d-482de4c73ae4,DISK]] 2023-05-27 11:59:47,017 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 11:59:47,017 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 11:59:47,020 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56892, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 11:59:47,023 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-27 11:59:47,023 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 11:59:47,025 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32953%2C1685188786537.meta, suffix=.meta, logDir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537, archiveDir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/oldWALs, maxLogs=32 2023-05-27 11:59:47,032 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537/jenkins-hbase4.apache.org%2C32953%2C1685188786537.meta.1685188787025.meta 2023-05-27 11:59:47,032 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39927,DS-868ac10b-0f0e-4a6e-8a38-8aaeab3d890d,DISK], DatanodeInfoWithStorage[127.0.0.1:45727,DS-e419e145-668a-46a0-b31d-482de4c73ae4,DISK]] 2023-05-27 11:59:47,032 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:59:47,032 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-27 11:59:47,032 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-27 11:59:47,032 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-27 11:59:47,032 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-27 11:59:47,032 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:59:47,033 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-27 11:59:47,033 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-27 11:59:47,035 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 11:59:47,036 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/info 2023-05-27 11:59:47,036 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/info 2023-05-27 11:59:47,037 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 11:59:47,037 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:59:47,037 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 11:59:47,038 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:59:47,038 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/rep_barrier 2023-05-27 11:59:47,039 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 11:59:47,039 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:59:47,039 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 11:59:47,040 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/table 2023-05-27 11:59:47,040 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/table 2023-05-27 11:59:47,040 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 11:59:47,041 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:59:47,041 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740 2023-05-27 11:59:47,043 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740 2023-05-27 11:59:47,045 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 11:59:47,046 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 11:59:47,047 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=803979, jitterRate=0.02231280505657196}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 11:59:47,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 11:59:47,049 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685188787017 2023-05-27 11:59:47,052 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-27 11:59:47,052 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-27 11:59:47,053 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,32953,1685188786537, state=OPEN 2023-05-27 11:59:47,055 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-27 11:59:47,055 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 11:59:47,057 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-27 11:59:47,057 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,32953,1685188786537 in 192 msec 2023-05-27 11:59:47,060 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-27 11:59:47,060 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 350 msec 2023-05-27 11:59:47,062 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 397 msec 2023-05-27 11:59:47,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685188787062, completionTime=-1 2023-05-27 11:59:47,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-27 11:59:47,063 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-27 11:59:47,065 DEBUG [hconnection-0x37e2b060-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 11:59:47,067 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56898, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 11:59:47,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-27 11:59:47,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685188847069 2023-05-27 11:59:47,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685188907069 2023-05-27 11:59:47,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-27 11:59:47,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46025,1685188786487-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 11:59:47,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46025,1685188786487-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:59:47,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46025,1685188786487-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:59:47,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:46025, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 11:59:47,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-27 11:59:47,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-27 11:59:47,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 11:59:47,083 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-27 11:59:47,083 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-27 11:59:47,085 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 11:59:47,086 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 11:59:47,087 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/.tmp/data/hbase/namespace/3bb19d8461bcde8cb229756ae66372b8 2023-05-27 11:59:47,088 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/.tmp/data/hbase/namespace/3bb19d8461bcde8cb229756ae66372b8 empty. 2023-05-27 11:59:47,088 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/.tmp/data/hbase/namespace/3bb19d8461bcde8cb229756ae66372b8 2023-05-27 11:59:47,088 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-27 11:59:47,098 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-27 11:59:47,099 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3bb19d8461bcde8cb229756ae66372b8, NAME => 'hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/.tmp 2023-05-27 11:59:47,105 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:59:47,106 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 3bb19d8461bcde8cb229756ae66372b8, disabling compactions & flushes 2023-05-27 11:59:47,106 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. 
2023-05-27 11:59:47,106 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. 2023-05-27 11:59:47,106 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. after waiting 0 ms 2023-05-27 11:59:47,106 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. 2023-05-27 11:59:47,106 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. 2023-05-27 11:59:47,106 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 3bb19d8461bcde8cb229756ae66372b8: 2023-05-27 11:59:47,108 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 11:59:47,109 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188787109"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188787109"}]},"ts":"1685188787109"} 2023-05-27 11:59:47,111 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 11:59:47,112 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 11:59:47,112 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188787112"}]},"ts":"1685188787112"} 2023-05-27 11:59:47,113 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-27 11:59:47,121 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=3bb19d8461bcde8cb229756ae66372b8, ASSIGN}] 2023-05-27 11:59:47,123 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=3bb19d8461bcde8cb229756ae66372b8, ASSIGN 2023-05-27 11:59:47,124 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=3bb19d8461bcde8cb229756ae66372b8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32953,1685188786537; forceNewPlan=false, retain=false 2023-05-27 11:59:47,275 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=3bb19d8461bcde8cb229756ae66372b8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 11:59:47,275 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188787275"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188787275"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188787275"}]},"ts":"1685188787275"} 2023-05-27 11:59:47,277 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 3bb19d8461bcde8cb229756ae66372b8, server=jenkins-hbase4.apache.org,32953,1685188786537}] 2023-05-27 11:59:47,432 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. 2023-05-27 11:59:47,432 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3bb19d8461bcde8cb229756ae66372b8, NAME => 'hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:59:47,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 3bb19d8461bcde8cb229756ae66372b8 2023-05-27 11:59:47,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:59:47,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3bb19d8461bcde8cb229756ae66372b8 2023-05-27 11:59:47,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3bb19d8461bcde8cb229756ae66372b8 2023-05-27 11:59:47,434 INFO [StoreOpener-3bb19d8461bcde8cb229756ae66372b8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 3bb19d8461bcde8cb229756ae66372b8 2023-05-27 11:59:47,435 DEBUG [StoreOpener-3bb19d8461bcde8cb229756ae66372b8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/namespace/3bb19d8461bcde8cb229756ae66372b8/info 2023-05-27 11:59:47,435 DEBUG [StoreOpener-3bb19d8461bcde8cb229756ae66372b8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/namespace/3bb19d8461bcde8cb229756ae66372b8/info 2023-05-27 11:59:47,436 INFO [StoreOpener-3bb19d8461bcde8cb229756ae66372b8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3bb19d8461bcde8cb229756ae66372b8 columnFamilyName info 2023-05-27 11:59:47,436 INFO [StoreOpener-3bb19d8461bcde8cb229756ae66372b8-1] regionserver.HStore(310): Store=3bb19d8461bcde8cb229756ae66372b8/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:59:47,437 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/namespace/3bb19d8461bcde8cb229756ae66372b8 2023-05-27 11:59:47,437 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/namespace/3bb19d8461bcde8cb229756ae66372b8 2023-05-27 11:59:47,440 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3bb19d8461bcde8cb229756ae66372b8 2023-05-27 11:59:47,441 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/namespace/3bb19d8461bcde8cb229756ae66372b8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:59:47,442 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3bb19d8461bcde8cb229756ae66372b8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=727357, jitterRate=-0.07511892914772034}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:59:47,442 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3bb19d8461bcde8cb229756ae66372b8: 2023-05-27 11:59:47,444 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8., pid=6, masterSystemTime=1685188787429 2023-05-27 11:59:47,446 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. 2023-05-27 11:59:47,446 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. 
2023-05-27 11:59:47,446 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=3bb19d8461bcde8cb229756ae66372b8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 11:59:47,446 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188787446"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685188787446"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685188787446"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188787446"}]},"ts":"1685188787446"} 2023-05-27 11:59:47,450 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-27 11:59:47,451 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 3bb19d8461bcde8cb229756ae66372b8, server=jenkins-hbase4.apache.org,32953,1685188786537 in 171 msec 2023-05-27 11:59:47,453 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-27 11:59:47,454 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=3bb19d8461bcde8cb229756ae66372b8, ASSIGN in 330 msec 2023-05-27 11:59:47,454 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 11:59:47,454 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188787454"}]},"ts":"1685188787454"} 2023-05-27 11:59:47,456 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-27 11:59:47,458 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 11:59:47,459 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 376 msec 2023-05-27 11:59:47,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-27 11:59:47,485 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:59:47,486 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:59:47,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-27 11:59:47,498 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): 
master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:59:47,502 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-05-27 11:59:47,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-27 11:59:47,518 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 11:59:47,521 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-05-27 11:59:47,526 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-27 11:59:47,528 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-27 11:59:47,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.965sec 2023-05-27 11:59:47,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-27 11:59:47,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-27 11:59:47,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-27 11:59:47,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46025,1685188786487-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-27 11:59:47,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46025,1685188786487-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-27 11:59:47,530 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-27 11:59:47,557 DEBUG [Listener at localhost/41535] zookeeper.ReadOnlyZKClient(139): Connect 0x199889da to 127.0.0.1:62142 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 11:59:47,561 DEBUG [Listener at localhost/41535] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@63710009, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 11:59:47,562 DEBUG [hconnection-0x58da9ce5-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 11:59:47,564 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56912, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 11:59:47,565 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,46025,1685188786487 2023-05-27 11:59:47,565 INFO [Listener at localhost/41535] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 11:59:47,569 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-27 11:59:47,569 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 11:59:47,570 INFO [Listener at localhost/41535] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-27 11:59:47,571 DEBUG [Listener at localhost/41535] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-27 11:59:47,577 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53406, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-27 11:59:47,578 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46025] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-27 11:59:47,578 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46025] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-27 11:59:47,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46025] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 11:59:47,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46025] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-05-27 11:59:47,585 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 11:59:47,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46025] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-05-27 11:59:47,586 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 11:59:47,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46025] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 11:59:47,588 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/.tmp/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471 2023-05-27 11:59:47,588 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/.tmp/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471 empty. 
2023-05-27 11:59:47,589 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/.tmp/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471 2023-05-27 11:59:47,589 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-05-27 11:59:47,601 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-27 11:59:47,602 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 566bdcf139c341c256bf5896c2d70471, NAME => 'TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/.tmp 2023-05-27 11:59:47,609 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:59:47,609 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1604): Closing 566bdcf139c341c256bf5896c2d70471, disabling compactions & flushes 2023-05-27 11:59:47,609 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 2023-05-27 11:59:47,609 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 2023-05-27 11:59:47,609 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. after waiting 0 ms 2023-05-27 11:59:47,609 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 2023-05-27 11:59:47,609 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 
2023-05-27 11:59:47,609 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 566bdcf139c341c256bf5896c2d70471: 2023-05-27 11:59:47,611 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 11:59:47,612 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685188787612"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188787612"}]},"ts":"1685188787612"} 2023-05-27 11:59:47,613 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 11:59:47,614 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 11:59:47,614 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188787614"}]},"ts":"1685188787614"} 2023-05-27 11:59:47,615 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-05-27 11:59:47,619 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=566bdcf139c341c256bf5896c2d70471, ASSIGN}] 2023-05-27 11:59:47,620 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=566bdcf139c341c256bf5896c2d70471, ASSIGN 2023-05-27 11:59:47,621 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=566bdcf139c341c256bf5896c2d70471, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32953,1685188786537; forceNewPlan=false, retain=false 2023-05-27 11:59:47,771 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=566bdcf139c341c256bf5896c2d70471, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 11:59:47,772 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685188787771"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188787771"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188787771"}]},"ts":"1685188787771"} 2023-05-27 11:59:47,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 566bdcf139c341c256bf5896c2d70471, server=jenkins-hbase4.apache.org,32953,1685188786537}] 2023-05-27 11:59:47,929 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open 
TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 2023-05-27 11:59:47,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 566bdcf139c341c256bf5896c2d70471, NAME => 'TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.', STARTKEY => '', ENDKEY => ''} 2023-05-27 11:59:47,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 566bdcf139c341c256bf5896c2d70471 2023-05-27 11:59:47,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 11:59:47,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 566bdcf139c341c256bf5896c2d70471 2023-05-27 11:59:47,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 566bdcf139c341c256bf5896c2d70471 2023-05-27 11:59:47,931 INFO [StoreOpener-566bdcf139c341c256bf5896c2d70471-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 566bdcf139c341c256bf5896c2d70471 2023-05-27 11:59:47,932 DEBUG [StoreOpener-566bdcf139c341c256bf5896c2d70471-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info 2023-05-27 11:59:47,932 DEBUG [StoreOpener-566bdcf139c341c256bf5896c2d70471-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info 2023-05-27 11:59:47,932 INFO [StoreOpener-566bdcf139c341c256bf5896c2d70471-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 566bdcf139c341c256bf5896c2d70471 columnFamilyName info 2023-05-27 11:59:47,933 INFO [StoreOpener-566bdcf139c341c256bf5896c2d70471-1] regionserver.HStore(310): Store=566bdcf139c341c256bf5896c2d70471/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 11:59:47,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471 2023-05-27 11:59:47,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471 2023-05-27 11:59:47,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 566bdcf139c341c256bf5896c2d70471 2023-05-27 11:59:47,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 11:59:47,939 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 566bdcf139c341c256bf5896c2d70471; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=781579, jitterRate=-0.006172120571136475}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 11:59:47,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 566bdcf139c341c256bf5896c2d70471: 2023-05-27 11:59:47,940 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471., pid=11, masterSystemTime=1685188787926 2023-05-27 11:59:47,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 2023-05-27 11:59:47,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 
2023-05-27 11:59:47,942 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=566bdcf139c341c256bf5896c2d70471, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 11:59:47,942 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685188787942"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685188787942"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685188787942"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188787942"}]},"ts":"1685188787942"} 2023-05-27 11:59:47,946 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-27 11:59:47,946 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 566bdcf139c341c256bf5896c2d70471, server=jenkins-hbase4.apache.org,32953,1685188786537 in 170 msec 2023-05-27 11:59:47,947 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-27 11:59:47,948 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=566bdcf139c341c256bf5896c2d70471, ASSIGN in 328 msec 2023-05-27 11:59:47,948 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 11:59:47,948 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188787948"}]},"ts":"1685188787948"} 2023-05-27 11:59:47,950 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-05-27 11:59:47,953 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 11:59:47,954 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 374 msec 2023-05-27 11:59:50,812 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-27 11:59:52,789 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-27 11:59:52,790 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-27 11:59:52,790 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-05-27 11:59:57,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46025] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 11:59:57,588 INFO [Listener at localhost/41535] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, 
procId: 9 completed 2023-05-27 11:59:57,590 DEBUG [Listener at localhost/41535] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-05-27 11:59:57,590 DEBUG [Listener at localhost/41535] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 2023-05-27 11:59:57,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 566bdcf139c341c256bf5896c2d70471 2023-05-27 11:59:57,602 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 566bdcf139c341c256bf5896c2d70471 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 11:59:57,614 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/a446459721ca4837b02ebe31ac8b315e 2023-05-27 11:59:57,622 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/a446459721ca4837b02ebe31ac8b315e as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/a446459721ca4837b02ebe31ac8b315e 2023-05-27 11:59:57,628 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/a446459721ca4837b02ebe31ac8b315e, entries=7, sequenceid=11, filesize=12.1 K 2023-05-27 11:59:57,629 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=21.02 KB/21520 for 566bdcf139c341c256bf5896c2d70471 in 27ms, sequenceid=11, compaction requested=false 2023-05-27 11:59:57,629 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 566bdcf139c341c256bf5896c2d70471: 2023-05-27 11:59:57,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 566bdcf139c341c256bf5896c2d70471 2023-05-27 11:59:57,630 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 566bdcf139c341c256bf5896c2d70471 1/1 column families, dataSize=22.07 KB heapSize=23.88 KB 2023-05-27 11:59:57,639 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=35 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/3c952b3f7541404a88a19c33c86f9741 2023-05-27 11:59:57,645 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/3c952b3f7541404a88a19c33c86f9741 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/3c952b3f7541404a88a19c33c86f9741 2023-05-27 11:59:57,649 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/3c952b3f7541404a88a19c33c86f9741, entries=21, sequenceid=35, filesize=26.9 K 2023-05-27 11:59:57,650 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22596, heapSize ~23.86 KB/24432, currentSize=4.20 KB/4304 for 566bdcf139c341c256bf5896c2d70471 in 20ms, sequenceid=35, compaction requested=false 2023-05-27 11:59:57,650 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 566bdcf139c341c256bf5896c2d70471: 2023-05-27 11:59:57,650 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.0 K, sizeToCheck=16.0 K 2023-05-27 11:59:57,650 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 11:59:57,650 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/3c952b3f7541404a88a19c33c86f9741 because midkey is the same as first or last row 2023-05-27 11:59:59,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 566bdcf139c341c256bf5896c2d70471 2023-05-27 11:59:59,638 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 566bdcf139c341c256bf5896c2d70471 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 11:59:59,650 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=45 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/4ced9f97797a4c9fa57db396f803b10d 2023-05-27 11:59:59,656 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/4ced9f97797a4c9fa57db396f803b10d as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4ced9f97797a4c9fa57db396f803b10d 2023-05-27 11:59:59,662 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4ced9f97797a4c9fa57db396f803b10d, entries=7, sequenceid=45, filesize=12.1 K 2023-05-27 11:59:59,663 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for 566bdcf139c341c256bf5896c2d70471 in 25ms, sequenceid=45, compaction requested=true 2023-05-27 11:59:59,663 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 566bdcf139c341c256bf5896c2d70471: 2023-05-27 11:59:59,663 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=51.1 K, sizeToCheck=16.0 K 2023-05-27 11:59:59,663 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 11:59:59,663 DEBUG 
[MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/3c952b3f7541404a88a19c33c86f9741 because midkey is the same as first or last row 2023-05-27 11:59:59,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 566bdcf139c341c256bf5896c2d70471 2023-05-27 11:59:59,663 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 11:59:59,664 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 11:59:59,664 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 566bdcf139c341c256bf5896c2d70471 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-05-27 11:59:59,666 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 52295 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 11:59:59,666 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1912): 566bdcf139c341c256bf5896c2d70471/info is initiating minor compaction (all files) 2023-05-27 11:59:59,666 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 566bdcf139c341c256bf5896c2d70471/info in TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 2023-05-27 11:59:59,666 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/a446459721ca4837b02ebe31ac8b315e, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/3c952b3f7541404a88a19c33c86f9741, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4ced9f97797a4c9fa57db396f803b10d] into tmpdir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp, totalSize=51.1 K 2023-05-27 11:59:59,667 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting a446459721ca4837b02ebe31ac8b315e, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685188797593 2023-05-27 11:59:59,668 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 3c952b3f7541404a88a19c33c86f9741, keycount=21, bloomtype=ROW, size=26.9 K, encoding=NONE, compression=NONE, seqNum=35, earliestPutTs=1685188797603 2023-05-27 11:59:59,668 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 4ced9f97797a4c9fa57db396f803b10d, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=45, earliestPutTs=1685188797630 2023-05-27 11:59:59,680 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed 
memstore data size=19.96 KB at sequenceid=67 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/33ece977d8ab41d7b2c9d6b0e593a619 2023-05-27 11:59:59,687 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] throttle.PressureAwareThroughputController(145): 566bdcf139c341c256bf5896c2d70471#info#compaction#29 average throughput is 17.96 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 11:59:59,688 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/33ece977d8ab41d7b2c9d6b0e593a619 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/33ece977d8ab41d7b2c9d6b0e593a619 2023-05-27 11:59:59,720 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/33ece977d8ab41d7b2c9d6b0e593a619, entries=19, sequenceid=67, filesize=24.7 K 2023-05-27 11:59:59,722 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=10.51 KB/10760 for 566bdcf139c341c256bf5896c2d70471 in 58ms, sequenceid=67, compaction requested=false 2023-05-27 11:59:59,722 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 566bdcf139c341c256bf5896c2d70471: 2023-05-27 11:59:59,722 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=75.8 K, sizeToCheck=16.0 K 2023-05-27 11:59:59,722 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 11:59:59,723 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/3c952b3f7541404a88a19c33c86f9741 because midkey is the same as first or last row 2023-05-27 11:59:59,727 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/4e680957d91445379d59404840e27393 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4e680957d91445379d59404840e27393 2023-05-27 11:59:59,733 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 566bdcf139c341c256bf5896c2d70471/info of 566bdcf139c341c256bf5896c2d70471 into 4e680957d91445379d59404840e27393(size=41.7 K), total size for store is 66.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 11:59:59,733 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 566bdcf139c341c256bf5896c2d70471: 2023-05-27 11:59:59,733 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471., storeName=566bdcf139c341c256bf5896c2d70471/info, priority=13, startTime=1685188799663; duration=0sec 2023-05-27 11:59:59,734 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=66.5 K, sizeToCheck=16.0 K 2023-05-27 11:59:59,734 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 11:59:59,734 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4e680957d91445379d59404840e27393 because midkey is the same as first or last row 2023-05-27 11:59:59,734 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:00:01,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 566bdcf139c341c256bf5896c2d70471 2023-05-27 12:00:01,682 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 566bdcf139c341c256bf5896c2d70471 1/1 column families, dataSize=11.56 KB heapSize=12.63 KB 2023-05-27 12:00:01,693 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=11.56 KB at sequenceid=82 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/f7fb465876784c13bdc8ffe108c96efc 2023-05-27 12:00:01,699 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/f7fb465876784c13bdc8ffe108c96efc as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/f7fb465876784c13bdc8ffe108c96efc 2023-05-27 12:00:01,704 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/f7fb465876784c13bdc8ffe108c96efc, entries=11, sequenceid=82, filesize=16.3 K 2023-05-27 12:00:01,705 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~11.56 KB/11836, heapSize ~12.61 KB/12912, currentSize=18.91 KB/19368 for 566bdcf139c341c256bf5896c2d70471 in 23ms, sequenceid=82, compaction requested=true 2023-05-27 12:00:01,706 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 566bdcf139c341c256bf5896c2d70471: 2023-05-27 12:00:01,706 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=82.8 K, sizeToCheck=16.0 K 2023-05-27 12:00:01,706 DEBUG [MemStoreFlusher.0] 
regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 12:00:01,706 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4e680957d91445379d59404840e27393 because midkey is the same as first or last row 2023-05-27 12:00:01,706 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:00:01,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 566bdcf139c341c256bf5896c2d70471 2023-05-27 12:00:01,706 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 566bdcf139c341c256bf5896c2d70471 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-05-27 12:00:01,706 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 12:00:01,708 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 84764 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 12:00:01,708 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1912): 566bdcf139c341c256bf5896c2d70471/info is initiating minor compaction (all files) 2023-05-27 12:00:01,708 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 566bdcf139c341c256bf5896c2d70471/info in TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 
2023-05-27 12:00:01,708 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4e680957d91445379d59404840e27393, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/33ece977d8ab41d7b2c9d6b0e593a619, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/f7fb465876784c13bdc8ffe108c96efc] into tmpdir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp, totalSize=82.8 K 2023-05-27 12:00:01,709 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 4e680957d91445379d59404840e27393, keycount=35, bloomtype=ROW, size=41.7 K, encoding=NONE, compression=NONE, seqNum=45, earliestPutTs=1685188797593 2023-05-27 12:00:01,710 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 33ece977d8ab41d7b2c9d6b0e593a619, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=67, earliestPutTs=1685188799639 2023-05-27 12:00:01,710 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting f7fb465876784c13bdc8ffe108c96efc, keycount=11, bloomtype=ROW, size=16.3 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1685188799665 2023-05-27 12:00:01,719 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=104 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/c670259ddbb04552852b52908a928493 2023-05-27 12:00:01,723 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=566bdcf139c341c256bf5896c2d70471, server=jenkins-hbase4.apache.org,32953,1685188786537 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-27 12:00:01,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] ipc.CallRunner(144): callId: 103 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:56912 deadline: 1685188811723, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=566bdcf139c341c256bf5896c2d70471, server=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:00:01,726 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/c670259ddbb04552852b52908a928493 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/c670259ddbb04552852b52908a928493 2023-05-27 12:00:01,728 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] throttle.PressureAwareThroughputController(145): 566bdcf139c341c256bf5896c2d70471#info#compaction#32 average throughput is 66.70 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 12:00:01,733 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/c670259ddbb04552852b52908a928493, entries=19, sequenceid=104, filesize=24.7 K 2023-05-27 12:00:01,734 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=10.51 KB/10760 for 566bdcf139c341c256bf5896c2d70471 in 28ms, sequenceid=104, compaction requested=false 2023-05-27 12:00:01,734 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 566bdcf139c341c256bf5896c2d70471: 2023-05-27 12:00:01,734 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=107.5 K, sizeToCheck=16.0 K 2023-05-27 12:00:01,735 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 12:00:01,735 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4e680957d91445379d59404840e27393 because midkey is the same as first or last row 2023-05-27 12:00:01,740 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/ef7cee17d7224b10a7f0db368f9b3544 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/ef7cee17d7224b10a7f0db368f9b3544 2023-05-27 12:00:01,746 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 566bdcf139c341c256bf5896c2d70471/info of 566bdcf139c341c256bf5896c2d70471 into ef7cee17d7224b10a7f0db368f9b3544(size=73.5 K), total size for store is 98.3 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 12:00:01,746 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 566bdcf139c341c256bf5896c2d70471: 2023-05-27 12:00:01,746 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471., storeName=566bdcf139c341c256bf5896c2d70471/info, priority=13, startTime=1685188801706; duration=0sec 2023-05-27 12:00:01,746 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=98.3 K, sizeToCheck=16.0 K 2023-05-27 12:00:01,746 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 12:00:01,747 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:00:01,747 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:00:01,748 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46025] assignment.AssignmentManager(1140): Split request from jenkins-hbase4.apache.org,32953,1685188786537, parent={ENCODED => 566bdcf139c341c256bf5896c2d70471, NAME => 'TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-05-27 12:00:01,755 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46025] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:00:01,761 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46025] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=566bdcf139c341c256bf5896c2d70471, daughterA=28b1bff25082546958403d48c0632485, daughterB=6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:01,762 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=566bdcf139c341c256bf5896c2d70471, daughterA=28b1bff25082546958403d48c0632485, daughterB=6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:01,762 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=566bdcf139c341c256bf5896c2d70471, daughterA=28b1bff25082546958403d48c0632485, daughterB=6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:01,762 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=566bdcf139c341c256bf5896c2d70471, daughterA=28b1bff25082546958403d48c0632485, daughterB=6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:01,771 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=TestLogRolling-testLogRolling, region=566bdcf139c341c256bf5896c2d70471, UNASSIGN}] 2023-05-27 12:00:01,772 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=566bdcf139c341c256bf5896c2d70471, UNASSIGN 2023-05-27 12:00:01,773 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=566bdcf139c341c256bf5896c2d70471, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:00:01,773 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685188801773"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188801773"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188801773"}]},"ts":"1685188801773"} 2023-05-27 12:00:01,775 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure 566bdcf139c341c256bf5896c2d70471, server=jenkins-hbase4.apache.org,32953,1685188786537}] 2023-05-27 12:00:01,934 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 566bdcf139c341c256bf5896c2d70471 2023-05-27 12:00:01,934 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 566bdcf139c341c256bf5896c2d70471, disabling compactions & flushes 2023-05-27 12:00:01,934 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 2023-05-27 12:00:01,934 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 2023-05-27 12:00:01,934 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. after waiting 0 ms 2023-05-27 12:00:01,934 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 
2023-05-27 12:00:01,934 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 566bdcf139c341c256bf5896c2d70471 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-27 12:00:01,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=118 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/8c635b9f9305483383d95ba235addead 2023-05-27 12:00:01,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.tmp/info/8c635b9f9305483383d95ba235addead as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/8c635b9f9305483383d95ba235addead 2023-05-27 12:00:01,957 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/8c635b9f9305483383d95ba235addead, entries=10, sequenceid=118, filesize=15.3 K 2023-05-27 12:00:01,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=0 B/0 for 566bdcf139c341c256bf5896c2d70471 in 24ms, sequenceid=118, compaction requested=true 2023-05-27 12:00:01,964 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/a446459721ca4837b02ebe31ac8b315e, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/3c952b3f7541404a88a19c33c86f9741, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4e680957d91445379d59404840e27393, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4ced9f97797a4c9fa57db396f803b10d, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/33ece977d8ab41d7b2c9d6b0e593a619, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/f7fb465876784c13bdc8ffe108c96efc] to archive 2023-05-27 12:00:01,964 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-27 12:00:01,966 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/a446459721ca4837b02ebe31ac8b315e to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/a446459721ca4837b02ebe31ac8b315e 2023-05-27 12:00:01,967 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/3c952b3f7541404a88a19c33c86f9741 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/3c952b3f7541404a88a19c33c86f9741 2023-05-27 12:00:01,969 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4e680957d91445379d59404840e27393 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4e680957d91445379d59404840e27393 2023-05-27 12:00:01,970 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4ced9f97797a4c9fa57db396f803b10d to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/4ced9f97797a4c9fa57db396f803b10d 2023-05-27 12:00:01,971 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/33ece977d8ab41d7b2c9d6b0e593a619 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/33ece977d8ab41d7b2c9d6b0e593a619 2023-05-27 12:00:01,972 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/f7fb465876784c13bdc8ffe108c96efc to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/f7fb465876784c13bdc8ffe108c96efc 2023-05-27 
12:00:01,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/recovered.edits/121.seqid, newMaxSeqId=121, maxSeqId=1 2023-05-27 12:00:01,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 2023-05-27 12:00:01,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 566bdcf139c341c256bf5896c2d70471: 2023-05-27 12:00:01,982 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 566bdcf139c341c256bf5896c2d70471 2023-05-27 12:00:01,982 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=566bdcf139c341c256bf5896c2d70471, regionState=CLOSED 2023-05-27 12:00:01,982 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685188801982"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188801982"}]},"ts":"1685188801982"} 2023-05-27 12:00:01,986 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-05-27 12:00:01,986 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure 566bdcf139c341c256bf5896c2d70471, server=jenkins-hbase4.apache.org,32953,1685188786537 in 209 msec 2023-05-27 12:00:01,988 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-05-27 12:00:01,988 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=566bdcf139c341c256bf5896c2d70471, UNASSIGN in 215 msec 2023-05-27 12:00:01,999 INFO [PEWorker-3] assignment.SplitTableRegionProcedure(694): pid=12 splitting 3 storefiles, region=566bdcf139c341c256bf5896c2d70471, threads=3 2023-05-27 12:00:02,001 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/8c635b9f9305483383d95ba235addead for region: 566bdcf139c341c256bf5896c2d70471 2023-05-27 12:00:02,001 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/c670259ddbb04552852b52908a928493 for region: 566bdcf139c341c256bf5896c2d70471 2023-05-27 12:00:02,001 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/ef7cee17d7224b10a7f0db368f9b3544 for region: 566bdcf139c341c256bf5896c2d70471 2023-05-27 12:00:02,012 DEBUG [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(700): Will create HFileLink file 
for hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/c670259ddbb04552852b52908a928493, top=true 2023-05-27 12:00:02,013 DEBUG [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/8c635b9f9305483383d95ba235addead, top=true 2023-05-27 12:00:02,025 INFO [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.splits/6e8ea9646b960345acad339e7776ad1d/info/TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-c670259ddbb04552852b52908a928493 for child: 6e8ea9646b960345acad339e7776ad1d, parent: 566bdcf139c341c256bf5896c2d70471 2023-05-27 12:00:02,025 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/c670259ddbb04552852b52908a928493 for region: 566bdcf139c341c256bf5896c2d70471 2023-05-27 12:00:02,025 INFO [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/.splits/6e8ea9646b960345acad339e7776ad1d/info/TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-8c635b9f9305483383d95ba235addead for child: 6e8ea9646b960345acad339e7776ad1d, parent: 566bdcf139c341c256bf5896c2d70471 2023-05-27 12:00:02,025 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/8c635b9f9305483383d95ba235addead for region: 566bdcf139c341c256bf5896c2d70471 2023-05-27 12:00:02,036 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/ef7cee17d7224b10a7f0db368f9b3544 for region: 566bdcf139c341c256bf5896c2d70471 2023-05-27 12:00:02,036 DEBUG [PEWorker-3] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region 566bdcf139c341c256bf5896c2d70471 Daughter A: 1 storefiles, Daughter B: 3 storefiles. 
2023-05-27 12:00:02,065 DEBUG [PEWorker-3] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485/recovered.edits/121.seqid, newMaxSeqId=121, maxSeqId=-1 2023-05-27 12:00:02,067 DEBUG [PEWorker-3] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/recovered.edits/121.seqid, newMaxSeqId=121, maxSeqId=-1 2023-05-27 12:00:02,069 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685188802069"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1685188802069"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1685188802069"}]},"ts":"1685188802069"} 2023-05-27 12:00:02,069 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685188802069"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188802069"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188802069"}]},"ts":"1685188802069"} 2023-05-27 12:00:02,069 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685188802069"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188802069"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188802069"}]},"ts":"1685188802069"} 2023-05-27 12:00:02,109 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32953] regionserver.HRegion(9158): Flush requested on 1588230740 2023-05-27 12:00:02,109 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
2023-05-27 12:00:02,110 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-05-27 12:00:02,118 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=28b1bff25082546958403d48c0632485, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6e8ea9646b960345acad339e7776ad1d, ASSIGN}] 2023-05-27 12:00:02,119 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=28b1bff25082546958403d48c0632485, ASSIGN 2023-05-27 12:00:02,119 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6e8ea9646b960345acad339e7776ad1d, ASSIGN 2023-05-27 12:00:02,120 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/.tmp/info/841877709d754c1881e9731dff24d41a 2023-05-27 12:00:02,120 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6e8ea9646b960345acad339e7776ad1d, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,32953,1685188786537; forceNewPlan=false, retain=false 2023-05-27 12:00:02,120 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=28b1bff25082546958403d48c0632485, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,32953,1685188786537; forceNewPlan=false, retain=false 2023-05-27 12:00:02,132 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/.tmp/table/d3c8b3f8b85a43e3873494901ab3915a 2023-05-27 12:00:02,137 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/.tmp/info/841877709d754c1881e9731dff24d41a as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/info/841877709d754c1881e9731dff24d41a 2023-05-27 12:00:02,141 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/info/841877709d754c1881e9731dff24d41a, entries=29, sequenceid=17, filesize=8.6 K 2023-05-27 12:00:02,142 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/.tmp/table/d3c8b3f8b85a43e3873494901ab3915a as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/table/d3c8b3f8b85a43e3873494901ab3915a 2023-05-27 12:00:02,146 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/table/d3c8b3f8b85a43e3873494901ab3915a, entries=4, sequenceid=17, filesize=4.8 K 2023-05-27 12:00:02,147 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4934, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 37ms, sequenceid=17, compaction requested=false 2023-05-27 12:00:02,148 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-27 12:00:02,271 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=6e8ea9646b960345acad339e7776ad1d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:00:02,271 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=28b1bff25082546958403d48c0632485, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:00:02,272 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685188802271"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188802271"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188802271"}]},"ts":"1685188802271"} 2023-05-27 12:00:02,272 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685188802271"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188802271"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188802271"}]},"ts":"1685188802271"} 2023-05-27 12:00:02,273 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE; OpenRegionProcedure 6e8ea9646b960345acad339e7776ad1d, server=jenkins-hbase4.apache.org,32953,1685188786537}] 2023-05-27 12:00:02,274 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=15, state=RUNNABLE; OpenRegionProcedure 28b1bff25082546958403d48c0632485, server=jenkins-hbase4.apache.org,32953,1685188786537}] 2023-05-27 12:00:02,429 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 
2023-05-27 12:00:02,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6e8ea9646b960345acad339e7776ad1d, NAME => 'TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.', STARTKEY => 'row0062', ENDKEY => ''} 2023-05-27 12:00:02,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:02,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 12:00:02,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:02,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:02,431 INFO [StoreOpener-6e8ea9646b960345acad339e7776ad1d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:02,432 DEBUG [StoreOpener-6e8ea9646b960345acad339e7776ad1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info 2023-05-27 12:00:02,432 DEBUG [StoreOpener-6e8ea9646b960345acad339e7776ad1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info 2023-05-27 12:00:02,432 INFO [StoreOpener-6e8ea9646b960345acad339e7776ad1d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6e8ea9646b960345acad339e7776ad1d columnFamilyName info 2023-05-27 12:00:02,442 DEBUG [StoreOpener-6e8ea9646b960345acad339e7776ad1d-1] regionserver.HStore(539): loaded hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-8c635b9f9305483383d95ba235addead 2023-05-27 12:00:02,446 DEBUG [StoreOpener-6e8ea9646b960345acad339e7776ad1d-1] regionserver.HStore(539): loaded 
hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-c670259ddbb04552852b52908a928493 2023-05-27 12:00:02,454 DEBUG [StoreOpener-6e8ea9646b960345acad339e7776ad1d-1] regionserver.HStore(539): loaded hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/ef7cee17d7224b10a7f0db368f9b3544.566bdcf139c341c256bf5896c2d70471->hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/ef7cee17d7224b10a7f0db368f9b3544-top 2023-05-27 12:00:02,454 INFO [StoreOpener-6e8ea9646b960345acad339e7776ad1d-1] regionserver.HStore(310): Store=6e8ea9646b960345acad339e7776ad1d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 12:00:02,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:02,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:02,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:02,459 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6e8ea9646b960345acad339e7776ad1d; next sequenceid=122; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=841938, jitterRate=0.07058064639568329}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 12:00:02,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:00:02,460 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d., pid=17, masterSystemTime=1685188802425 2023-05-27 12:00:02,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:00:02,461 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 12:00:02,463 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 
2023-05-27 12:00:02,463 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1912): 6e8ea9646b960345acad339e7776ad1d/info is initiating minor compaction (all files) 2023-05-27 12:00:02,463 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 6e8ea9646b960345acad339e7776ad1d/info in TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 2023-05-27 12:00:02,463 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/ef7cee17d7224b10a7f0db368f9b3544.566bdcf139c341c256bf5896c2d70471->hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/ef7cee17d7224b10a7f0db368f9b3544-top, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-c670259ddbb04552852b52908a928493, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-8c635b9f9305483383d95ba235addead] into tmpdir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp, totalSize=113.5 K 2023-05-27 12:00:02,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 2023-05-27 12:00:02,463 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 2023-05-27 12:00:02,463 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485. 
2023-05-27 12:00:02,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 28b1bff25082546958403d48c0632485, NAME => 'TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485.', STARTKEY => '', ENDKEY => 'row0062'} 2023-05-27 12:00:02,463 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting ef7cee17d7224b10a7f0db368f9b3544.566bdcf139c341c256bf5896c2d70471, keycount=32, bloomtype=ROW, size=73.5 K, encoding=NONE, compression=NONE, seqNum=83, earliestPutTs=1685188797593 2023-05-27 12:00:02,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 28b1bff25082546958403d48c0632485 2023-05-27 12:00:02,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 12:00:02,464 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=6e8ea9646b960345acad339e7776ad1d, regionState=OPEN, openSeqNum=122, regionLocation=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:00:02,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 28b1bff25082546958403d48c0632485 2023-05-27 12:00:02,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 28b1bff25082546958403d48c0632485 2023-05-27 12:00:02,464 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685188802463"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685188802463"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685188802463"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188802463"}]},"ts":"1685188802463"} 2023-05-27 12:00:02,464 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-c670259ddbb04552852b52908a928493, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=104, earliestPutTs=1685188801683 2023-05-27 12:00:02,464 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-8c635b9f9305483383d95ba235addead, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=118, earliestPutTs=1685188801707 2023-05-27 12:00:02,465 INFO [StoreOpener-28b1bff25082546958403d48c0632485-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 28b1bff25082546958403d48c0632485 2023-05-27 12:00:02,466 DEBUG [StoreOpener-28b1bff25082546958403d48c0632485-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485/info 2023-05-27 12:00:02,466 DEBUG [StoreOpener-28b1bff25082546958403d48c0632485-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485/info 2023-05-27 12:00:02,466 INFO [StoreOpener-28b1bff25082546958403d48c0632485-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 28b1bff25082546958403d48c0632485 columnFamilyName info 2023-05-27 12:00:02,467 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-05-27 12:00:02,468 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; OpenRegionProcedure 6e8ea9646b960345acad339e7776ad1d, server=jenkins-hbase4.apache.org,32953,1685188786537 in 192 msec 2023-05-27 12:00:02,469 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6e8ea9646b960345acad339e7776ad1d, ASSIGN in 350 msec 2023-05-27 12:00:02,474 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6e8ea9646b960345acad339e7776ad1d#info#compaction#36 average throughput is 33.86 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 12:00:02,475 DEBUG [StoreOpener-28b1bff25082546958403d48c0632485-1] regionserver.HStore(539): loaded hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485/info/ef7cee17d7224b10a7f0db368f9b3544.566bdcf139c341c256bf5896c2d70471->hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/ef7cee17d7224b10a7f0db368f9b3544-bottom 2023-05-27 12:00:02,475 INFO [StoreOpener-28b1bff25082546958403d48c0632485-1] regionserver.HStore(310): Store=28b1bff25082546958403d48c0632485/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 12:00:02,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485 2023-05-27 12:00:02,477 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485 2023-05-27 12:00:02,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 28b1bff25082546958403d48c0632485 2023-05-27 12:00:02,484 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 28b1bff25082546958403d48c0632485; next sequenceid=122; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=838465, jitterRate=0.06616455316543579}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 12:00:02,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 28b1bff25082546958403d48c0632485: 2023-05-27 12:00:02,484 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485., pid=18, masterSystemTime=1685188802425 2023-05-27 12:00:02,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:00:02,486 DEBUG [RS:0;jenkins-hbase4:32953-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-05-27 12:00:02,487 INFO [RS:0;jenkins-hbase4:32953-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485. 
2023-05-27 12:00:02,487 DEBUG [RS:0;jenkins-hbase4:32953-longCompactions-0] regionserver.HStore(1912): 28b1bff25082546958403d48c0632485/info is initiating minor compaction (all files) 2023-05-27 12:00:02,487 INFO [RS:0;jenkins-hbase4:32953-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 28b1bff25082546958403d48c0632485/info in TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485. 2023-05-27 12:00:02,487 INFO [RS:0;jenkins-hbase4:32953-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485/info/ef7cee17d7224b10a7f0db368f9b3544.566bdcf139c341c256bf5896c2d70471->hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/ef7cee17d7224b10a7f0db368f9b3544-bottom] into tmpdir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485/.tmp, totalSize=73.5 K 2023-05-27 12:00:02,488 DEBUG [RS:0;jenkins-hbase4:32953-longCompactions-0] compactions.Compactor(207): Compacting ef7cee17d7224b10a7f0db368f9b3544.566bdcf139c341c256bf5896c2d70471, keycount=32, bloomtype=ROW, size=73.5 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1685188797593 2023-05-27 12:00:02,488 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485. 2023-05-27 12:00:02,488 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485. 
2023-05-27 12:00:02,488 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=28b1bff25082546958403d48c0632485, regionState=OPEN, openSeqNum=122, regionLocation=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:00:02,489 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685188802488"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685188802488"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685188802488"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188802488"}]},"ts":"1685188802488"} 2023-05-27 12:00:02,490 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/7af3f2c5b793462c8de3f1084bda0d4a as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/7af3f2c5b793462c8de3f1084bda0d4a 2023-05-27 12:00:02,494 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-05-27 12:00:02,494 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; OpenRegionProcedure 28b1bff25082546958403d48c0632485, server=jenkins-hbase4.apache.org,32953,1685188786537 in 216 msec 2023-05-27 12:00:02,495 INFO [RS:0;jenkins-hbase4:32953-longCompactions-0] throttle.PressureAwareThroughputController(145): 28b1bff25082546958403d48c0632485#info#compaction#37 average throughput is 62.60 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 12:00:02,496 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-05-27 12:00:02,496 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=28b1bff25082546958403d48c0632485, ASSIGN in 376 msec 2023-05-27 12:00:02,497 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=566bdcf139c341c256bf5896c2d70471, daughterA=28b1bff25082546958403d48c0632485, daughterB=6e8ea9646b960345acad339e7776ad1d in 740 msec 2023-05-27 12:00:02,499 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6e8ea9646b960345acad339e7776ad1d/info of 6e8ea9646b960345acad339e7776ad1d into 7af3f2c5b793462c8de3f1084bda0d4a(size=39.8 K), total size for store is 39.8 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 12:00:02,499 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:00:02,499 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d., storeName=6e8ea9646b960345acad339e7776ad1d/info, priority=13, startTime=1685188802460; duration=0sec 2023-05-27 12:00:02,499 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:00:02,513 DEBUG [RS:0;jenkins-hbase4:32953-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485/.tmp/info/8f841ea41ed048948b44c3dfb16473cc as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485/info/8f841ea41ed048948b44c3dfb16473cc 2023-05-27 12:00:02,519 INFO [RS:0;jenkins-hbase4:32953-longCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in 28b1bff25082546958403d48c0632485/info of 28b1bff25082546958403d48c0632485 into 8f841ea41ed048948b44c3dfb16473cc(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-27 12:00:02,520 DEBUG [RS:0;jenkins-hbase4:32953-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 28b1bff25082546958403d48c0632485: 2023-05-27 12:00:02,520 INFO [RS:0;jenkins-hbase4:32953-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485., storeName=28b1bff25082546958403d48c0632485/info, priority=15, startTime=1685188802485; duration=0sec 2023-05-27 12:00:02,520 DEBUG [RS:0;jenkins-hbase4:32953-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:00:07,536 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-27 12:00:11,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] ipc.CallRunner(144): callId: 105 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:56912 deadline: 1685188821778, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1685188787578.566bdcf139c341c256bf5896c2d70471. 
is not online on jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:00:32,725 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=2, created chunk count=13, reused chunk count=30, reuseRatio=69.77% 2023-05-27 12:00:32,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-05-27 12:00:33,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:33,875 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6e8ea9646b960345acad339e7776ad1d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 12:00:33,889 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=132 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/0d90ff1e9c0f4fc2acedd2f811ea307b 2023-05-27 12:00:33,896 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/0d90ff1e9c0f4fc2acedd2f811ea307b as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0d90ff1e9c0f4fc2acedd2f811ea307b 2023-05-27 12:00:33,901 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0d90ff1e9c0f4fc2acedd2f811ea307b, entries=7, sequenceid=132, filesize=12.1 K 2023-05-27 12:00:33,902 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=22.07 KB/22596 for 6e8ea9646b960345acad339e7776ad1d in 27ms, sequenceid=132, compaction requested=false 2023-05-27 12:00:33,902 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:00:33,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:33,903 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6e8ea9646b960345acad339e7776ad1d 1/1 column families, dataSize=23.12 KB heapSize=25 KB 2023-05-27 12:00:33,919 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=23.12 KB at sequenceid=157 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/5e693539a4d74a87a2fed63d65d7b7fa 2023-05-27 12:00:33,925 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/5e693539a4d74a87a2fed63d65d7b7fa as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/5e693539a4d74a87a2fed63d65d7b7fa 2023-05-27 12:00:33,931 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/5e693539a4d74a87a2fed63d65d7b7fa, entries=22, sequenceid=157, filesize=27.9 K 2023-05-27 12:00:33,932 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.12 KB/23672, heapSize ~24.98 KB/25584, currentSize=5.25 KB/5380 for 6e8ea9646b960345acad339e7776ad1d in 29ms, sequenceid=157, compaction requested=true 2023-05-27 12:00:33,932 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:00:33,932 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:00:33,932 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 12:00:33,933 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 81719 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 12:00:33,933 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1912): 6e8ea9646b960345acad339e7776ad1d/info is initiating minor compaction (all files) 2023-05-27 12:00:33,933 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 6e8ea9646b960345acad339e7776ad1d/info in TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 
2023-05-27 12:00:33,933 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/7af3f2c5b793462c8de3f1084bda0d4a, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0d90ff1e9c0f4fc2acedd2f811ea307b, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/5e693539a4d74a87a2fed63d65d7b7fa] into tmpdir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp, totalSize=79.8 K 2023-05-27 12:00:33,934 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 7af3f2c5b793462c8de3f1084bda0d4a, keycount=33, bloomtype=ROW, size=39.8 K, encoding=NONE, compression=NONE, seqNum=118, earliestPutTs=1685188799677 2023-05-27 12:00:33,934 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 0d90ff1e9c0f4fc2acedd2f811ea307b, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=132, earliestPutTs=1685188831867 2023-05-27 12:00:33,934 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 5e693539a4d74a87a2fed63d65d7b7fa, keycount=22, bloomtype=ROW, size=27.9 K, encoding=NONE, compression=NONE, seqNum=157, earliestPutTs=1685188833875 2023-05-27 12:00:33,945 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6e8ea9646b960345acad339e7776ad1d#info#compaction#40 average throughput is 63.62 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 12:00:33,960 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/4c566a3c0cb34dd2ab537f5b1475783d as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/4c566a3c0cb34dd2ab537f5b1475783d 2023-05-27 12:00:33,966 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6e8ea9646b960345acad339e7776ad1d/info of 6e8ea9646b960345acad339e7776ad1d into 4c566a3c0cb34dd2ab537f5b1475783d(size=70.5 K), total size for store is 70.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 12:00:33,966 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:00:33,966 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d., storeName=6e8ea9646b960345acad339e7776ad1d/info, priority=13, startTime=1685188833932; duration=0sec 2023-05-27 12:00:33,966 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:00:35,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:35,913 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6e8ea9646b960345acad339e7776ad1d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 12:00:35,934 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=168 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/af0665bbbe5f475ba89f66cbf29cf9cb 2023-05-27 12:00:35,938 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6e8ea9646b960345acad339e7776ad1d, server=jenkins-hbase4.apache.org,32953,1685188786537 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-27 12:00:35,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] ipc.CallRunner(144): callId: 167 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:56912 deadline: 1685188845938, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6e8ea9646b960345acad339e7776ad1d, server=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:00:35,942 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/af0665bbbe5f475ba89f66cbf29cf9cb as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/af0665bbbe5f475ba89f66cbf29cf9cb 2023-05-27 12:00:35,948 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/af0665bbbe5f475ba89f66cbf29cf9cb, entries=7, sequenceid=168, filesize=12.1 K 2023-05-27 12:00:35,949 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for 6e8ea9646b960345acad339e7776ad1d in 36ms, sequenceid=168, compaction requested=false 2023-05-27 12:00:35,949 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:00:39,665 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-27 12:00:46,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:46,035 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6e8ea9646b960345acad339e7776ad1d 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-05-27 12:00:46,045 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6e8ea9646b960345acad339e7776ad1d, server=jenkins-hbase4.apache.org,32953,1685188786537 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-27 12:00:46,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] ipc.CallRunner(144): callId: 176 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:56912 deadline: 1685188856045, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6e8ea9646b960345acad339e7776ad1d, server=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:00:46,049 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/8acb5f4ac5ba49a58a943d7a8bade99a 2023-05-27 12:00:46,055 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/8acb5f4ac5ba49a58a943d7a8bade99a as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/8acb5f4ac5ba49a58a943d7a8bade99a 2023-05-27 12:00:46,060 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/8acb5f4ac5ba49a58a943d7a8bade99a, entries=23, sequenceid=194, filesize=29.0 K 2023-05-27 12:00:46,061 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=6.30 KB/6456 for 6e8ea9646b960345acad339e7776ad1d in 26ms, sequenceid=194, compaction requested=true 2023-05-27 12:00:46,061 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:00:46,061 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:00:46,061 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 12:00:46,063 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 114302 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 12:00:46,063 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1912): 6e8ea9646b960345acad339e7776ad1d/info is initiating minor compaction (all files) 2023-05-27 12:00:46,063 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 6e8ea9646b960345acad339e7776ad1d/info in TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 
2023-05-27 12:00:46,063 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/4c566a3c0cb34dd2ab537f5b1475783d, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/af0665bbbe5f475ba89f66cbf29cf9cb, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/8acb5f4ac5ba49a58a943d7a8bade99a] into tmpdir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp, totalSize=111.6 K 2023-05-27 12:00:46,063 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 4c566a3c0cb34dd2ab537f5b1475783d, keycount=62, bloomtype=ROW, size=70.5 K, encoding=NONE, compression=NONE, seqNum=157, earliestPutTs=1685188799677 2023-05-27 12:00:46,064 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting af0665bbbe5f475ba89f66cbf29cf9cb, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=168, earliestPutTs=1685188833904 2023-05-27 12:00:46,064 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 8acb5f4ac5ba49a58a943d7a8bade99a, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=194, earliestPutTs=1685188835914 2023-05-27 12:00:46,075 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6e8ea9646b960345acad339e7776ad1d#info#compaction#43 average throughput is 94.41 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 12:00:46,088 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/3172147a9d8847cca2a1ca33b528f592 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/3172147a9d8847cca2a1ca33b528f592 2023-05-27 12:00:46,094 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6e8ea9646b960345acad339e7776ad1d/info of 6e8ea9646b960345acad339e7776ad1d into 3172147a9d8847cca2a1ca33b528f592(size=102.2 K), total size for store is 102.2 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 12:00:46,094 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:00:46,094 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d., storeName=6e8ea9646b960345acad339e7776ad1d/info, priority=13, startTime=1685188846061; duration=0sec 2023-05-27 12:00:46,094 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:00:56,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:56,051 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6e8ea9646b960345acad339e7776ad1d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 12:00:56,059 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=205 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/ff3a4aa6af95450391b321c9e3ebb6b7 2023-05-27 12:00:56,064 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/ff3a4aa6af95450391b321c9e3ebb6b7 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/ff3a4aa6af95450391b321c9e3ebb6b7 2023-05-27 12:00:56,069 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/ff3a4aa6af95450391b321c9e3ebb6b7, entries=7, sequenceid=205, filesize=12.1 K 2023-05-27 12:00:56,070 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 6e8ea9646b960345acad339e7776ad1d in 19ms, sequenceid=205, compaction requested=false 2023-05-27 12:00:56,070 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:00:58,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:00:58,059 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6e8ea9646b960345acad339e7776ad1d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 12:00:58,074 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=215 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/0b4eb5614617401287b19a33abb28109 2023-05-27 12:00:58,080 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/0b4eb5614617401287b19a33abb28109 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0b4eb5614617401287b19a33abb28109 2023-05-27 12:00:58,084 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6e8ea9646b960345acad339e7776ad1d, server=jenkins-hbase4.apache.org,32953,1685188786537 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-27 12:00:58,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] ipc.CallRunner(144): callId: 208 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:56912 deadline: 1685188868083, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6e8ea9646b960345acad339e7776ad1d, server=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:00:58,085 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0b4eb5614617401287b19a33abb28109, entries=7, sequenceid=215, filesize=12.1 K 2023-05-27 12:00:58,086 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for 6e8ea9646b960345acad339e7776ad1d in 28ms, sequenceid=215, compaction requested=true 2023-05-27 12:00:58,086 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:00:58,086 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:00:58,086 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 12:00:58,087 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 129484 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 12:00:58,087 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1912): 6e8ea9646b960345acad339e7776ad1d/info is initiating minor compaction (all files) 2023-05-27 12:00:58,087 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] 
regionserver.HRegion(2259): Starting compaction of 6e8ea9646b960345acad339e7776ad1d/info in TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 2023-05-27 12:00:58,087 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/3172147a9d8847cca2a1ca33b528f592, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/ff3a4aa6af95450391b321c9e3ebb6b7, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0b4eb5614617401287b19a33abb28109] into tmpdir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp, totalSize=126.4 K 2023-05-27 12:00:58,087 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 3172147a9d8847cca2a1ca33b528f592, keycount=92, bloomtype=ROW, size=102.2 K, encoding=NONE, compression=NONE, seqNum=194, earliestPutTs=1685188799677 2023-05-27 12:00:58,088 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting ff3a4aa6af95450391b321c9e3ebb6b7, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=205, earliestPutTs=1685188846036 2023-05-27 12:00:58,088 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 0b4eb5614617401287b19a33abb28109, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=215, earliestPutTs=1685188858052 2023-05-27 12:00:58,097 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6e8ea9646b960345acad339e7776ad1d#info#compaction#46 average throughput is 108.77 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 12:00:58,108 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/0f46dbd95589449f9afdef6cf159f44c as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0f46dbd95589449f9afdef6cf159f44c 2023-05-27 12:00:58,113 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6e8ea9646b960345acad339e7776ad1d/info of 6e8ea9646b960345acad339e7776ad1d into 0f46dbd95589449f9afdef6cf159f44c(size=117.1 K), total size for store is 117.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 12:00:58,114 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:00:58,114 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d., storeName=6e8ea9646b960345acad339e7776ad1d/info, priority=13, startTime=1685188858086; duration=0sec 2023-05-27 12:00:58,114 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:01:08,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:01:08,138 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6e8ea9646b960345acad339e7776ad1d 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-05-27 12:01:08,148 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=242 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/df9da3f85e5441a2b5462a89539f10a0 2023-05-27 12:01:08,154 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/df9da3f85e5441a2b5462a89539f10a0 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/df9da3f85e5441a2b5462a89539f10a0 2023-05-27 12:01:08,158 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/df9da3f85e5441a2b5462a89539f10a0, entries=23, sequenceid=242, filesize=29.0 K 2023-05-27 12:01:08,159 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=2.10 KB/2152 for 6e8ea9646b960345acad339e7776ad1d in 21ms, sequenceid=242, compaction requested=false 2023-05-27 12:01:08,159 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:01:10,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:01:10,148 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6e8ea9646b960345acad339e7776ad1d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 12:01:10,160 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=252 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/250f922d260d482bb74345c6082219be 2023-05-27 12:01:10,166 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/250f922d260d482bb74345c6082219be as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/250f922d260d482bb74345c6082219be 2023-05-27 12:01:10,172 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/250f922d260d482bb74345c6082219be, entries=7, sequenceid=252, filesize=12.1 K 2023-05-27 12:01:10,172 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for 6e8ea9646b960345acad339e7776ad1d in 24ms, sequenceid=252, compaction requested=true 2023-05-27 12:01:10,173 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:01:10,173 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:01:10,173 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 12:01:10,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:01:10,173 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6e8ea9646b960345acad339e7776ad1d 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-05-27 12:01:10,174 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 161946 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 12:01:10,174 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1912): 6e8ea9646b960345acad339e7776ad1d/info is initiating minor compaction (all files) 2023-05-27 12:01:10,174 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 6e8ea9646b960345acad339e7776ad1d/info in TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 
2023-05-27 12:01:10,174 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0f46dbd95589449f9afdef6cf159f44c, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/df9da3f85e5441a2b5462a89539f10a0, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/250f922d260d482bb74345c6082219be] into tmpdir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp, totalSize=158.2 K 2023-05-27 12:01:10,175 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 0f46dbd95589449f9afdef6cf159f44c, keycount=106, bloomtype=ROW, size=117.1 K, encoding=NONE, compression=NONE, seqNum=215, earliestPutTs=1685188799677 2023-05-27 12:01:10,175 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting df9da3f85e5441a2b5462a89539f10a0, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=242, earliestPutTs=1685188858059 2023-05-27 12:01:10,176 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 250f922d260d482bb74345c6082219be, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=252, earliestPutTs=1685188868139 2023-05-27 12:01:10,197 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=278 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/955034815dd743aab3c08480c476e6a6 2023-05-27 12:01:10,201 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6e8ea9646b960345acad339e7776ad1d#info#compaction#50 average throughput is 69.78 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 12:01:10,203 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/955034815dd743aab3c08480c476e6a6 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/955034815dd743aab3c08480c476e6a6 2023-05-27 12:01:10,208 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/955034815dd743aab3c08480c476e6a6, entries=23, sequenceid=278, filesize=29.0 K 2023-05-27 12:01:10,208 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=4.20 KB/4304 for 6e8ea9646b960345acad339e7776ad1d in 35ms, sequenceid=278, compaction requested=false 2023-05-27 12:01:10,209 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:01:10,217 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/69e395a2de7a4939afb89cc7c9171f03 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/69e395a2de7a4939afb89cc7c9171f03 2023-05-27 12:01:10,222 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6e8ea9646b960345acad339e7776ad1d/info of 6e8ea9646b960345acad339e7776ad1d into 69e395a2de7a4939afb89cc7c9171f03(size=148.9 K), total size for store is 177.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 12:01:10,222 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:01:10,222 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d., storeName=6e8ea9646b960345acad339e7776ad1d/info, priority=13, startTime=1685188870173; duration=0sec 2023-05-27 12:01:10,222 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:01:12,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:01:12,182 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6e8ea9646b960345acad339e7776ad1d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 12:01:12,199 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=289 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/915ea3919323445d8af1ada25b53df10 2023-05-27 12:01:12,205 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/915ea3919323445d8af1ada25b53df10 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/915ea3919323445d8af1ada25b53df10 2023-05-27 12:01:12,208 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6e8ea9646b960345acad339e7776ad1d, server=jenkins-hbase4.apache.org,32953,1685188786537 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-27 12:01:12,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] ipc.CallRunner(144): callId: 270 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:56912 deadline: 1685188882208, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6e8ea9646b960345acad339e7776ad1d, server=jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:01:12,210 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/915ea3919323445d8af1ada25b53df10, entries=7, sequenceid=289, filesize=12.1 K 2023-05-27 12:01:12,211 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for 6e8ea9646b960345acad339e7776ad1d in 29ms, sequenceid=289, compaction requested=true 2023-05-27 12:01:12,211 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:01:12,211 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:01:12,211 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 12:01:12,212 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 194621 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 12:01:12,212 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1912): 6e8ea9646b960345acad339e7776ad1d/info is initiating minor compaction (all files) 2023-05-27 12:01:12,213 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 6e8ea9646b960345acad339e7776ad1d/info in TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 
2023-05-27 12:01:12,213 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/69e395a2de7a4939afb89cc7c9171f03, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/955034815dd743aab3c08480c476e6a6, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/915ea3919323445d8af1ada25b53df10] into tmpdir=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp, totalSize=190.1 K 2023-05-27 12:01:12,213 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 69e395a2de7a4939afb89cc7c9171f03, keycount=136, bloomtype=ROW, size=148.9 K, encoding=NONE, compression=NONE, seqNum=252, earliestPutTs=1685188799677 2023-05-27 12:01:12,213 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 955034815dd743aab3c08480c476e6a6, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=278, earliestPutTs=1685188870148 2023-05-27 12:01:12,214 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] compactions.Compactor(207): Compacting 915ea3919323445d8af1ada25b53df10, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=289, earliestPutTs=1685188870174 2023-05-27 12:01:12,224 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6e8ea9646b960345acad339e7776ad1d#info#compaction#52 average throughput is 85.17 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 12:01:12,239 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/ac6e72ff497f4c3aa5d4453bf479b42b as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/ac6e72ff497f4c3aa5d4453bf479b42b 2023-05-27 12:01:12,244 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6e8ea9646b960345acad339e7776ad1d/info of 6e8ea9646b960345acad339e7776ad1d into ac6e72ff497f4c3aa5d4453bf479b42b(size=180.7 K), total size for store is 180.7 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 12:01:12,244 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:01:12,244 INFO [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d., storeName=6e8ea9646b960345acad339e7776ad1d/info, priority=13, startTime=1685188872211; duration=0sec 2023-05-27 12:01:12,244 DEBUG [RS:0;jenkins-hbase4:32953-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 12:01:22,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32953] regionserver.HRegion(9158): Flush requested on 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:01:22,230 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6e8ea9646b960345acad339e7776ad1d 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-05-27 12:01:22,239 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=316 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/e803a48424bb4ff5a3c65c6eaba9fc2f 2023-05-27 12:01:22,245 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/e803a48424bb4ff5a3c65c6eaba9fc2f as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/e803a48424bb4ff5a3c65c6eaba9fc2f 2023-05-27 12:01:22,249 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/e803a48424bb4ff5a3c65c6eaba9fc2f, entries=23, sequenceid=316, filesize=29.0 K 2023-05-27 12:01:22,249 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=6.30 KB/6456 for 6e8ea9646b960345acad339e7776ad1d in 19ms, sequenceid=316, compaction requested=false 2023-05-27 12:01:22,249 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:01:24,237 INFO [Listener at localhost/41535] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-05-27 12:01:24,250 INFO [Listener at localhost/41535] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537/jenkins-hbase4.apache.org%2C32953%2C1685188786537.1685188786920 with entries=308, filesize=306.60 KB; new WAL /user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537/jenkins-hbase4.apache.org%2C32953%2C1685188786537.1685188884237 2023-05-27 12:01:24,251 DEBUG [Listener at localhost/41535] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45727,DS-e419e145-668a-46a0-b31d-482de4c73ae4,DISK], 
DatanodeInfoWithStorage[127.0.0.1:39927,DS-868ac10b-0f0e-4a6e-8a38-8aaeab3d890d,DISK]] 2023-05-27 12:01:24,251 DEBUG [Listener at localhost/41535] wal.AbstractFSWAL(716): hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537/jenkins-hbase4.apache.org%2C32953%2C1685188786537.1685188786920 is not closed yet, will try archiving it next time 2023-05-27 12:01:24,257 INFO [Listener at localhost/41535] regionserver.HRegion(2745): Flushing 6e8ea9646b960345acad339e7776ad1d 1/1 column families, dataSize=6.30 KB heapSize=7 KB 2023-05-27 12:01:24,265 INFO [Listener at localhost/41535] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.30 KB at sequenceid=325 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/745f0c1742e5448cbd921c96e672b432 2023-05-27 12:01:24,270 DEBUG [Listener at localhost/41535] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/.tmp/info/745f0c1742e5448cbd921c96e672b432 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/745f0c1742e5448cbd921c96e672b432 2023-05-27 12:01:24,274 INFO [Listener at localhost/41535] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/745f0c1742e5448cbd921c96e672b432, entries=6, sequenceid=325, filesize=11.1 K 2023-05-27 12:01:24,275 INFO [Listener at localhost/41535] regionserver.HRegion(2948): Finished flush of dataSize ~6.30 KB/6456, heapSize ~6.98 KB/7152, currentSize=0 B/0 for 6e8ea9646b960345acad339e7776ad1d in 18ms, sequenceid=325, compaction requested=true 2023-05-27 12:01:24,275 DEBUG [Listener at localhost/41535] regionserver.HRegion(2446): Flush status journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:01:24,275 INFO [Listener at localhost/41535] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-05-27 12:01:24,285 INFO [Listener at localhost/41535] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/.tmp/info/0eb7f2dc9c334359bddf59d5cfd468de 2023-05-27 12:01:24,290 DEBUG [Listener at localhost/41535] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/.tmp/info/0eb7f2dc9c334359bddf59d5cfd468de as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/info/0eb7f2dc9c334359bddf59d5cfd468de 2023-05-27 12:01:24,294 INFO [Listener at localhost/41535] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/info/0eb7f2dc9c334359bddf59d5cfd468de, entries=16, sequenceid=24, filesize=7.0 K 2023-05-27 12:01:24,295 INFO [Listener at localhost/41535] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2312, heapSize ~3.67 
KB/3760, currentSize=0 B/0 for 1588230740 in 20ms, sequenceid=24, compaction requested=false 2023-05-27 12:01:24,295 DEBUG [Listener at localhost/41535] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-27 12:01:24,295 INFO [Listener at localhost/41535] regionserver.HRegion(2745): Flushing 3bb19d8461bcde8cb229756ae66372b8 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-27 12:01:24,303 INFO [Listener at localhost/41535] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/namespace/3bb19d8461bcde8cb229756ae66372b8/.tmp/info/a60ee688957c437ea8b735773f8758c3 2023-05-27 12:01:24,307 DEBUG [Listener at localhost/41535] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/namespace/3bb19d8461bcde8cb229756ae66372b8/.tmp/info/a60ee688957c437ea8b735773f8758c3 as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/namespace/3bb19d8461bcde8cb229756ae66372b8/info/a60ee688957c437ea8b735773f8758c3 2023-05-27 12:01:24,311 INFO [Listener at localhost/41535] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/namespace/3bb19d8461bcde8cb229756ae66372b8/info/a60ee688957c437ea8b735773f8758c3, entries=2, sequenceid=6, filesize=4.8 K 2023-05-27 12:01:24,312 INFO [Listener at localhost/41535] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 3bb19d8461bcde8cb229756ae66372b8 in 17ms, sequenceid=6, compaction requested=false 2023-05-27 12:01:24,312 DEBUG [Listener at localhost/41535] regionserver.HRegion(2446): Flush status journal for 3bb19d8461bcde8cb229756ae66372b8: 2023-05-27 12:01:24,313 DEBUG [Listener at localhost/41535] regionserver.HRegion(2446): Flush status journal for 28b1bff25082546958403d48c0632485: 2023-05-27 12:01:24,322 INFO [Listener at localhost/41535] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537/jenkins-hbase4.apache.org%2C32953%2C1685188786537.1685188884237 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537/jenkins-hbase4.apache.org%2C32953%2C1685188786537.1685188884313 2023-05-27 12:01:24,322 DEBUG [Listener at localhost/41535] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39927,DS-868ac10b-0f0e-4a6e-8a38-8aaeab3d890d,DISK], DatanodeInfoWithStorage[127.0.0.1:45727,DS-e419e145-668a-46a0-b31d-482de4c73ae4,DISK]] 2023-05-27 12:01:24,322 DEBUG [Listener at localhost/41535] wal.AbstractFSWAL(716): hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537/jenkins-hbase4.apache.org%2C32953%2C1685188786537.1685188884237 is not closed yet, will try archiving it next time 2023-05-27 12:01:24,326 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537/jenkins-hbase4.apache.org%2C32953%2C1685188786537.1685188786920 to 
hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/oldWALs/jenkins-hbase4.apache.org%2C32953%2C1685188786537.1685188786920 2023-05-27 12:01:24,327 INFO [Listener at localhost/41535] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-05-27 12:01:24,328 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537/jenkins-hbase4.apache.org%2C32953%2C1685188786537.1685188884237 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/oldWALs/jenkins-hbase4.apache.org%2C32953%2C1685188786537.1685188884237 2023-05-27 12:01:24,427 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-27 12:01:24,427 INFO [Listener at localhost/41535] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-27 12:01:24,428 DEBUG [Listener at localhost/41535] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x199889da to 127.0.0.1:62142 2023-05-27 12:01:24,428 DEBUG [Listener at localhost/41535] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 12:01:24,428 DEBUG [Listener at localhost/41535] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-27 12:01:24,428 DEBUG [Listener at localhost/41535] util.JVMClusterUtil(257): Found active master hash=789922395, stopped=false 2023-05-27 12:01:24,428 INFO [Listener at localhost/41535] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,46025,1685188786487 2023-05-27 12:01:24,430 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 12:01:24,430 INFO [Listener at localhost/41535] procedure2.ProcedureExecutor(629): Stopping 2023-05-27 12:01:24,430 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): regionserver:32953-0x1006c82bc290001, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 12:01:24,430 DEBUG [Listener at localhost/41535] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x46199476 to 127.0.0.1:62142 2023-05-27 12:01:24,430 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 12:01:24,431 DEBUG [Listener at localhost/41535] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 12:01:24,431 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 12:01:24,431 INFO [Listener at localhost/41535] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,32953,1685188786537' ***** 2023-05-27 12:01:24,431 INFO [Listener at localhost/41535] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 12:01:24,431 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32953-0x1006c82bc290001, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 12:01:24,432 INFO 
[RS:0;jenkins-hbase4:32953] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 12:01:24,432 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 12:01:24,432 INFO [RS:0;jenkins-hbase4:32953] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-27 12:01:24,432 INFO [RS:0;jenkins-hbase4:32953] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-27 12:01:24,432 INFO [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(3303): Received CLOSE for 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:01:24,432 INFO [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(3303): Received CLOSE for 3bb19d8461bcde8cb229756ae66372b8 2023-05-27 12:01:24,432 INFO [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(3303): Received CLOSE for 28b1bff25082546958403d48c0632485 2023-05-27 12:01:24,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6e8ea9646b960345acad339e7776ad1d, disabling compactions & flushes 2023-05-27 12:01:24,432 INFO [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:01:24,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 2023-05-27 12:01:24,432 DEBUG [RS:0;jenkins-hbase4:32953] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x781c3e85 to 127.0.0.1:62142 2023-05-27 12:01:24,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 2023-05-27 12:01:24,432 DEBUG [RS:0;jenkins-hbase4:32953] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 12:01:24,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. after waiting 0 ms 2023-05-27 12:01:24,432 INFO [RS:0;jenkins-hbase4:32953] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-27 12:01:24,433 INFO [RS:0;jenkins-hbase4:32953] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-27 12:01:24,433 INFO [RS:0;jenkins-hbase4:32953] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-27 12:01:24,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 
2023-05-27 12:01:24,433 INFO [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 12:01:24,433 INFO [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-05-27 12:01:24,433 DEBUG [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(1478): Online Regions={6e8ea9646b960345acad339e7776ad1d=TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d., 1588230740=hbase:meta,,1.1588230740, 3bb19d8461bcde8cb229756ae66372b8=hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8., 28b1bff25082546958403d48c0632485=TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485.} 2023-05-27 12:01:24,433 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 12:01:24,433 DEBUG [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(1504): Waiting on 1588230740, 28b1bff25082546958403d48c0632485, 3bb19d8461bcde8cb229756ae66372b8, 6e8ea9646b960345acad339e7776ad1d 2023-05-27 12:01:24,434 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 12:01:24,436 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 12:01:24,436 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 12:01:24,437 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 12:01:24,457 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/ef7cee17d7224b10a7f0db368f9b3544.566bdcf139c341c256bf5896c2d70471->hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/ef7cee17d7224b10a7f0db368f9b3544-top, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-c670259ddbb04552852b52908a928493, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/7af3f2c5b793462c8de3f1084bda0d4a, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-8c635b9f9305483383d95ba235addead, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0d90ff1e9c0f4fc2acedd2f811ea307b, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/4c566a3c0cb34dd2ab537f5b1475783d, 
hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/5e693539a4d74a87a2fed63d65d7b7fa, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/af0665bbbe5f475ba89f66cbf29cf9cb, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/3172147a9d8847cca2a1ca33b528f592, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/8acb5f4ac5ba49a58a943d7a8bade99a, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/ff3a4aa6af95450391b321c9e3ebb6b7, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0f46dbd95589449f9afdef6cf159f44c, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0b4eb5614617401287b19a33abb28109, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/df9da3f85e5441a2b5462a89539f10a0, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/69e395a2de7a4939afb89cc7c9171f03, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/250f922d260d482bb74345c6082219be, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/955034815dd743aab3c08480c476e6a6, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/915ea3919323445d8af1ada25b53df10] to archive 2023-05-27 12:01:24,458 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-27 12:01:24,460 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/ef7cee17d7224b10a7f0db368f9b3544.566bdcf139c341c256bf5896c2d70471 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/ef7cee17d7224b10a7f0db368f9b3544.566bdcf139c341c256bf5896c2d70471 2023-05-27 12:01:24,460 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-05-27 12:01:24,460 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-27 12:01:24,461 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 12:01:24,461 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 12:01:24,461 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-27 12:01:24,461 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-c670259ddbb04552852b52908a928493 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-c670259ddbb04552852b52908a928493 2023-05-27 12:01:24,463 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/7af3f2c5b793462c8de3f1084bda0d4a to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/7af3f2c5b793462c8de3f1084bda0d4a 2023-05-27 12:01:24,464 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-8c635b9f9305483383d95ba235addead to 
hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/TestLogRolling-testLogRolling=566bdcf139c341c256bf5896c2d70471-8c635b9f9305483383d95ba235addead 2023-05-27 12:01:24,465 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0d90ff1e9c0f4fc2acedd2f811ea307b to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0d90ff1e9c0f4fc2acedd2f811ea307b 2023-05-27 12:01:24,467 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/4c566a3c0cb34dd2ab537f5b1475783d to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/4c566a3c0cb34dd2ab537f5b1475783d 2023-05-27 12:01:24,468 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/5e693539a4d74a87a2fed63d65d7b7fa to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/5e693539a4d74a87a2fed63d65d7b7fa 2023-05-27 12:01:24,469 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/af0665bbbe5f475ba89f66cbf29cf9cb to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/af0665bbbe5f475ba89f66cbf29cf9cb 2023-05-27 12:01:24,470 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/3172147a9d8847cca2a1ca33b528f592 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/3172147a9d8847cca2a1ca33b528f592 2023-05-27 12:01:24,471 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/8acb5f4ac5ba49a58a943d7a8bade99a to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/8acb5f4ac5ba49a58a943d7a8bade99a 2023-05-27 12:01:24,473 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/ff3a4aa6af95450391b321c9e3ebb6b7 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/ff3a4aa6af95450391b321c9e3ebb6b7 2023-05-27 12:01:24,474 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0f46dbd95589449f9afdef6cf159f44c to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0f46dbd95589449f9afdef6cf159f44c 2023-05-27 12:01:24,475 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0b4eb5614617401287b19a33abb28109 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/0b4eb5614617401287b19a33abb28109 2023-05-27 12:01:24,476 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/df9da3f85e5441a2b5462a89539f10a0 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/df9da3f85e5441a2b5462a89539f10a0 2023-05-27 12:01:24,478 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/69e395a2de7a4939afb89cc7c9171f03 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/69e395a2de7a4939afb89cc7c9171f03 2023-05-27 12:01:24,479 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): 
Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/250f922d260d482bb74345c6082219be to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/250f922d260d482bb74345c6082219be 2023-05-27 12:01:24,480 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/955034815dd743aab3c08480c476e6a6 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/955034815dd743aab3c08480c476e6a6 2023-05-27 12:01:24,481 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/915ea3919323445d8af1ada25b53df10 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/info/915ea3919323445d8af1ada25b53df10 2023-05-27 12:01:24,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/6e8ea9646b960345acad339e7776ad1d/recovered.edits/328.seqid, newMaxSeqId=328, maxSeqId=121 2023-05-27 12:01:24,488 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 2023-05-27 12:01:24,488 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6e8ea9646b960345acad339e7776ad1d: 2023-05-27 12:01:24,488 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1685188801755.6e8ea9646b960345acad339e7776ad1d. 2023-05-27 12:01:24,488 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3bb19d8461bcde8cb229756ae66372b8, disabling compactions & flushes 2023-05-27 12:01:24,488 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. 2023-05-27 12:01:24,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. 2023-05-27 12:01:24,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. after waiting 0 ms 2023-05-27 12:01:24,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. 
2023-05-27 12:01:24,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/hbase/namespace/3bb19d8461bcde8cb229756ae66372b8/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-27 12:01:24,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. 2023-05-27 12:01:24,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3bb19d8461bcde8cb229756ae66372b8: 2023-05-27 12:01:24,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685188787082.3bb19d8461bcde8cb229756ae66372b8. 2023-05-27 12:01:24,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 28b1bff25082546958403d48c0632485, disabling compactions & flushes 2023-05-27 12:01:24,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485. 2023-05-27 12:01:24,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485. 2023-05-27 12:01:24,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485. after waiting 0 ms 2023-05-27 12:01:24,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485. 2023-05-27 12:01:24,496 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485/info/ef7cee17d7224b10a7f0db368f9b3544.566bdcf139c341c256bf5896c2d70471->hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/566bdcf139c341c256bf5896c2d70471/info/ef7cee17d7224b10a7f0db368f9b3544-bottom] to archive 2023-05-27 12:01:24,496 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-27 12:01:24,498 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485/info/ef7cee17d7224b10a7f0db368f9b3544.566bdcf139c341c256bf5896c2d70471 to hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/archive/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485/info/ef7cee17d7224b10a7f0db368f9b3544.566bdcf139c341c256bf5896c2d70471 2023-05-27 12:01:24,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/data/default/TestLogRolling-testLogRolling/28b1bff25082546958403d48c0632485/recovered.edits/126.seqid, newMaxSeqId=126, maxSeqId=121 2023-05-27 12:01:24,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485. 2023-05-27 12:01:24,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 28b1bff25082546958403d48c0632485: 2023-05-27 12:01:24,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1685188801755.28b1bff25082546958403d48c0632485. 2023-05-27 12:01:24,634 INFO [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32953,1685188786537; all regions closed. 2023-05-27 12:01:24,635 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:01:24,640 DEBUG [RS:0;jenkins-hbase4:32953] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/oldWALs 2023-05-27 12:01:24,640 INFO [RS:0;jenkins-hbase4:32953] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C32953%2C1685188786537.meta:.meta(num 1685188787025) 2023-05-27 12:01:24,641 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/WALs/jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:01:24,645 DEBUG [RS:0;jenkins-hbase4:32953] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/oldWALs 2023-05-27 12:01:24,646 INFO [RS:0;jenkins-hbase4:32953] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C32953%2C1685188786537:(num 1685188884313) 2023-05-27 12:01:24,646 DEBUG [RS:0;jenkins-hbase4:32953] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 12:01:24,646 INFO [RS:0;jenkins-hbase4:32953] regionserver.LeaseManager(133): Closed leases 2023-05-27 12:01:24,646 INFO [RS:0;jenkins-hbase4:32953] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-27 12:01:24,646 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-27 12:01:24,647 INFO [RS:0;jenkins-hbase4:32953] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32953 2023-05-27 12:01:24,650 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 12:01:24,650 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): regionserver:32953-0x1006c82bc290001, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32953,1685188786537 2023-05-27 12:01:24,650 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): regionserver:32953-0x1006c82bc290001, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 12:01:24,651 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,32953,1685188786537] 2023-05-27 12:01:24,651 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,32953,1685188786537; numProcessing=1 2023-05-27 12:01:24,653 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,32953,1685188786537 already deleted, retry=false 2023-05-27 12:01:24,653 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,32953,1685188786537 expired; onlineServers=0 2023-05-27 12:01:24,653 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,46025,1685188786487' ***** 2023-05-27 12:01:24,653 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-27 12:01:24,654 DEBUG [M:0;jenkins-hbase4:46025] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2b2ac361, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 12:01:24,654 INFO [M:0;jenkins-hbase4:46025] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46025,1685188786487 2023-05-27 12:01:24,654 INFO [M:0;jenkins-hbase4:46025] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46025,1685188786487; all regions closed. 2023-05-27 12:01:24,654 DEBUG [M:0;jenkins-hbase4:46025] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 12:01:24,654 DEBUG [M:0;jenkins-hbase4:46025] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-27 12:01:24,654 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-27 12:01:24,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188786668] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188786668,5,FailOnTimeoutGroup] 2023-05-27 12:01:24,654 DEBUG [M:0;jenkins-hbase4:46025] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-27 12:01:24,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188786668] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188786668,5,FailOnTimeoutGroup] 2023-05-27 12:01:24,655 INFO [M:0;jenkins-hbase4:46025] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-27 12:01:24,655 INFO [M:0;jenkins-hbase4:46025] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-27 12:01:24,656 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-27 12:01:24,656 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 12:01:24,656 INFO [M:0;jenkins-hbase4:46025] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-27 12:01:24,656 DEBUG [M:0;jenkins-hbase4:46025] master.HMaster(1512): Stopping service threads 2023-05-27 12:01:24,656 INFO [M:0;jenkins-hbase4:46025] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-27 12:01:24,656 ERROR [M:0;jenkins-hbase4:46025] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-27 12:01:24,656 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 12:01:24,656 INFO [M:0;jenkins-hbase4:46025] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-27 12:01:24,656 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-27 12:01:24,657 DEBUG [M:0;jenkins-hbase4:46025] zookeeper.ZKUtil(398): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-27 12:01:24,657 WARN [M:0;jenkins-hbase4:46025] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-27 12:01:24,657 INFO [M:0;jenkins-hbase4:46025] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-27 12:01:24,657 INFO [M:0;jenkins-hbase4:46025] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-27 12:01:24,657 DEBUG [M:0;jenkins-hbase4:46025] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 12:01:24,657 INFO [M:0;jenkins-hbase4:46025] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 12:01:24,657 DEBUG [M:0;jenkins-hbase4:46025] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 12:01:24,657 DEBUG [M:0;jenkins-hbase4:46025] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 12:01:24,657 DEBUG [M:0;jenkins-hbase4:46025] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 12:01:24,657 INFO [M:0;jenkins-hbase4:46025] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.69 KB heapSize=78.42 KB 2023-05-27 12:01:24,671 INFO [M:0;jenkins-hbase4:46025] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.69 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f49e680bcc4a4c09bcb6f0ef2f7b60bf 2023-05-27 12:01:24,675 INFO [M:0;jenkins-hbase4:46025] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f49e680bcc4a4c09bcb6f0ef2f7b60bf 2023-05-27 12:01:24,676 DEBUG [M:0;jenkins-hbase4:46025] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f49e680bcc4a4c09bcb6f0ef2f7b60bf as hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f49e680bcc4a4c09bcb6f0ef2f7b60bf 2023-05-27 12:01:24,681 INFO [M:0;jenkins-hbase4:46025] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f49e680bcc4a4c09bcb6f0ef2f7b60bf 2023-05-27 12:01:24,681 INFO [M:0;jenkins-hbase4:46025] regionserver.HStore(1080): Added hdfs://localhost:40147/user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f49e680bcc4a4c09bcb6f0ef2f7b60bf, entries=18, sequenceid=160, filesize=6.9 K 2023-05-27 12:01:24,682 INFO [M:0;jenkins-hbase4:46025] regionserver.HRegion(2948): Finished flush of dataSize ~64.69 KB/66244, heapSize ~78.41 KB/80288, currentSize=0 B/0 for 
1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=160, compaction requested=false 2023-05-27 12:01:24,683 INFO [M:0;jenkins-hbase4:46025] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 12:01:24,683 DEBUG [M:0;jenkins-hbase4:46025] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 12:01:24,683 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/adec378f-f1ba-3d4f-b1b6-21add126cf62/MasterData/WALs/jenkins-hbase4.apache.org,46025,1685188786487 2023-05-27 12:01:24,686 INFO [M:0;jenkins-hbase4:46025] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-27 12:01:24,686 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 12:01:24,687 INFO [M:0;jenkins-hbase4:46025] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46025 2023-05-27 12:01:24,689 DEBUG [M:0;jenkins-hbase4:46025] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,46025,1685188786487 already deleted, retry=false 2023-05-27 12:01:24,751 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): regionserver:32953-0x1006c82bc290001, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 12:01:24,751 INFO [RS:0;jenkins-hbase4:32953] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32953,1685188786537; zookeeper connection closed. 2023-05-27 12:01:24,751 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): regionserver:32953-0x1006c82bc290001, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 12:01:24,752 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@30ef7757] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@30ef7757 2023-05-27 12:01:24,752 INFO [Listener at localhost/41535] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-27 12:01:24,799 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-27 12:01:24,851 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 12:01:24,851 INFO [M:0;jenkins-hbase4:46025] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46025,1685188786487; zookeeper connection closed. 
2023-05-27 12:01:24,851 DEBUG [Listener at localhost/41535-EventThread] zookeeper.ZKWatcher(600): master:46025-0x1006c82bc290000, quorum=127.0.0.1:62142, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 12:01:24,852 WARN [Listener at localhost/41535] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 12:01:24,856 INFO [Listener at localhost/41535] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 12:01:24,961 WARN [BP-886764506-172.31.14.131-1685188785953 heartbeating to localhost/127.0.0.1:40147] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 12:01:24,961 WARN [BP-886764506-172.31.14.131-1685188785953 heartbeating to localhost/127.0.0.1:40147] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-886764506-172.31.14.131-1685188785953 (Datanode Uuid 626af19d-6512-4704-8725-f56f5210ff30) service to localhost/127.0.0.1:40147 2023-05-27 12:01:24,962 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/cluster_76099d46-217a-c2a6-6b7a-500668283d27/dfs/data/data3/current/BP-886764506-172.31.14.131-1685188785953] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 12:01:24,962 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/cluster_76099d46-217a-c2a6-6b7a-500668283d27/dfs/data/data4/current/BP-886764506-172.31.14.131-1685188785953] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 12:01:24,964 WARN [Listener at localhost/41535] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 12:01:24,967 INFO [Listener at localhost/41535] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 12:01:25,073 WARN [BP-886764506-172.31.14.131-1685188785953 heartbeating to localhost/127.0.0.1:40147] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 12:01:25,073 WARN [BP-886764506-172.31.14.131-1685188785953 heartbeating to localhost/127.0.0.1:40147] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-886764506-172.31.14.131-1685188785953 (Datanode Uuid b78a79ee-7470-4cc5-94fe-d9719490e414) service to localhost/127.0.0.1:40147 2023-05-27 12:01:25,073 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/cluster_76099d46-217a-c2a6-6b7a-500668283d27/dfs/data/data1/current/BP-886764506-172.31.14.131-1685188785953] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 12:01:25,074 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/cluster_76099d46-217a-c2a6-6b7a-500668283d27/dfs/data/data2/current/BP-886764506-172.31.14.131-1685188785953] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 12:01:25,086 INFO [Listener at localhost/41535] log.Slf4jLog(67): Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 12:01:25,202 INFO [Listener at localhost/41535] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-27 12:01:25,232 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-27 12:01:25,242 INFO [Listener at localhost/41535] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=107 (was 96) - Thread LEAK? -, OpenFileDescriptor=534 (was 500) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=39 (was 46), ProcessCount=169 (was 169), AvailableMemoryMB=3182 (was 3469) 2023-05-27 12:01:25,251 INFO [Listener at localhost/41535] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=107, OpenFileDescriptor=534, MaxFileDescriptor=60000, SystemLoadAverage=39, ProcessCount=169, AvailableMemoryMB=3182 2023-05-27 12:01:25,251 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-27 12:01:25,251 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/hadoop.log.dir so I do NOT create it in target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21 2023-05-27 12:01:25,251 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d4dec68a-e902-fefc-16ba-71becd68cab5/hadoop.tmp.dir so I do NOT create it in target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21 2023-05-27 12:01:25,251 INFO [Listener at localhost/41535] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/cluster_1e1413b2-91b7-eb82-ce1d-69db907d30b9, deleteOnExit=true 2023-05-27 12:01:25,251 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-27 12:01:25,251 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/test.cache.data in system properties and HBase conf 2023-05-27 12:01:25,251 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/hadoop.tmp.dir in system properties and HBase conf 2023-05-27 12:01:25,252 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/hadoop.log.dir in system properties and HBase conf 2023-05-27 12:01:25,252 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-27 12:01:25,252 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-27 12:01:25,252 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-27 12:01:25,252 DEBUG [Listener at localhost/41535] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-27 12:01:25,252 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-27 12:01:25,252 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-27 12:01:25,252 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-27 12:01:25,252 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 12:01:25,252 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-27 12:01:25,253 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-27 12:01:25,253 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 12:01:25,253 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/dfs.journalnode.edits.dir in 
system properties and HBase conf 2023-05-27 12:01:25,253 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-27 12:01:25,253 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/nfs.dump.dir in system properties and HBase conf 2023-05-27 12:01:25,253 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/java.io.tmpdir in system properties and HBase conf 2023-05-27 12:01:25,253 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 12:01:25,253 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-27 12:01:25,253 INFO [Listener at localhost/41535] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-27 12:01:25,255 WARN [Listener at localhost/41535] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-27 12:01:25,258 WARN [Listener at localhost/41535] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 12:01:25,258 WARN [Listener at localhost/41535] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 12:01:25,296 WARN [Listener at localhost/41535] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 12:01:25,298 INFO [Listener at localhost/41535] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 12:01:25,302 INFO [Listener at localhost/41535] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/java.io.tmpdir/Jetty_localhost_41207_hdfs____.8wcny2/webapp 2023-05-27 12:01:25,392 INFO [Listener at localhost/41535] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41207 2023-05-27 12:01:25,393 WARN [Listener at localhost/41535] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-27 12:01:25,396 WARN [Listener at localhost/41535] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 12:01:25,396 WARN [Listener at localhost/41535] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 12:01:25,435 WARN [Listener at localhost/36327] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 12:01:25,449 WARN [Listener at localhost/36327] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 12:01:25,451 WARN [Listener at localhost/36327] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 12:01:25,452 INFO [Listener at localhost/36327] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 12:01:25,456 INFO [Listener at localhost/36327] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/java.io.tmpdir/Jetty_localhost_46061_datanode____.lia06o/webapp 2023-05-27 12:01:25,546 INFO [Listener at localhost/36327] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46061 2023-05-27 12:01:25,553 WARN [Listener at localhost/39041] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 12:01:25,564 WARN [Listener at localhost/39041] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 12:01:25,566 WARN [Listener at localhost/39041] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 12:01:25,567 INFO [Listener at localhost/39041] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 12:01:25,569 INFO [Listener at localhost/39041] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/java.io.tmpdir/Jetty_localhost_46063_datanode____.v7nalq/webapp 2023-05-27 12:01:25,655 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa020de4fa5d841f7: Processing first storage report for DS-0d054c27-6464-4a3e-95c7-2e2b5a9a5422 from datanode 0dc57100-f4af-4324-bd94-6e4ff691077c 2023-05-27 12:01:25,655 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa020de4fa5d841f7: from storage DS-0d054c27-6464-4a3e-95c7-2e2b5a9a5422 node DatanodeRegistration(127.0.0.1:39053, datanodeUuid=0dc57100-f4af-4324-bd94-6e4ff691077c, infoPort=46497, infoSecurePort=0, ipcPort=39041, storageInfo=lv=-57;cid=testClusterID;nsid=1471264182;c=1685188885260), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 12:01:25,655 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa020de4fa5d841f7: Processing first storage report for DS-acff9604-3fe3-4af0-b5ee-aa51cba13b8b from datanode 0dc57100-f4af-4324-bd94-6e4ff691077c 2023-05-27 12:01:25,655 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa020de4fa5d841f7: from storage DS-acff9604-3fe3-4af0-b5ee-aa51cba13b8b node DatanodeRegistration(127.0.0.1:39053, datanodeUuid=0dc57100-f4af-4324-bd94-6e4ff691077c, infoPort=46497, infoSecurePort=0, ipcPort=39041, storageInfo=lv=-57;cid=testClusterID;nsid=1471264182;c=1685188885260), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 12:01:25,660 INFO [Listener at localhost/39041] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46063 2023-05-27 12:01:25,666 WARN [Listener at localhost/39827] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 12:01:25,747 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa7c7ea6afdede21b: Processing first storage report for DS-1161064d-a36f-4e60-b4e0-7370549c78f2 from datanode e2032bd9-68a6-40b1-9fcb-f0f7f7fc81b4 2023-05-27 12:01:25,748 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa7c7ea6afdede21b: from storage DS-1161064d-a36f-4e60-b4e0-7370549c78f2 node DatanodeRegistration(127.0.0.1:36039, datanodeUuid=e2032bd9-68a6-40b1-9fcb-f0f7f7fc81b4, infoPort=41395, infoSecurePort=0, ipcPort=39827, storageInfo=lv=-57;cid=testClusterID;nsid=1471264182;c=1685188885260), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-27 12:01:25,748 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa7c7ea6afdede21b: Processing first storage report for DS-bd3addc4-fc31-4e9f-9652-3eed7097e4ff from datanode e2032bd9-68a6-40b1-9fcb-f0f7f7fc81b4 2023-05-27 12:01:25,748 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa7c7ea6afdede21b: from storage DS-bd3addc4-fc31-4e9f-9652-3eed7097e4ff node DatanodeRegistration(127.0.0.1:36039, datanodeUuid=e2032bd9-68a6-40b1-9fcb-f0f7f7fc81b4, infoPort=41395, infoSecurePort=0, ipcPort=39827, storageInfo=lv=-57;cid=testClusterID;nsid=1471264182;c=1685188885260), 
blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 12:01:25,772 DEBUG [Listener at localhost/39827] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21 2023-05-27 12:01:25,774 INFO [Listener at localhost/39827] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/cluster_1e1413b2-91b7-eb82-ce1d-69db907d30b9/zookeeper_0, clientPort=55033, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/cluster_1e1413b2-91b7-eb82-ce1d-69db907d30b9/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/cluster_1e1413b2-91b7-eb82-ce1d-69db907d30b9/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-27 12:01:25,775 INFO [Listener at localhost/39827] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=55033 2023-05-27 12:01:25,775 INFO [Listener at localhost/39827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 12:01:25,776 INFO [Listener at localhost/39827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 12:01:25,788 INFO [Listener at localhost/39827] util.FSUtils(471): Created version file at hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b with version=8 2023-05-27 12:01:25,788 INFO [Listener at localhost/39827] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43439/user/jenkins/test-data/23dcbf3a-ba1f-fdee-45aa-d087a92c0264/hbase-staging 2023-05-27 12:01:25,790 INFO [Listener at localhost/39827] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 12:01:25,790 INFO [Listener at localhost/39827] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 12:01:25,790 INFO [Listener at localhost/39827] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 12:01:25,790 INFO [Listener at localhost/39827] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 12:01:25,791 INFO [Listener at localhost/39827] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 12:01:25,791 INFO [Listener at localhost/39827] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 
12:01:25,791 INFO [Listener at localhost/39827] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 12:01:25,792 INFO [Listener at localhost/39827] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37349 2023-05-27 12:01:25,793 INFO [Listener at localhost/39827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 12:01:25,793 INFO [Listener at localhost/39827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 12:01:25,794 INFO [Listener at localhost/39827] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37349 connecting to ZooKeeper ensemble=127.0.0.1:55033 2023-05-27 12:01:25,800 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:373490x0, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 12:01:25,801 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37349-0x1006c8440100000 connected 2023-05-27 12:01:25,813 DEBUG [Listener at localhost/39827] zookeeper.ZKUtil(164): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 12:01:25,813 DEBUG [Listener at localhost/39827] zookeeper.ZKUtil(164): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 12:01:25,814 DEBUG [Listener at localhost/39827] zookeeper.ZKUtil(164): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 12:01:25,814 DEBUG [Listener at localhost/39827] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37349 2023-05-27 12:01:25,814 DEBUG [Listener at localhost/39827] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37349 2023-05-27 12:01:25,814 DEBUG [Listener at localhost/39827] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37349 2023-05-27 12:01:25,815 DEBUG [Listener at localhost/39827] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37349 2023-05-27 12:01:25,815 DEBUG [Listener at localhost/39827] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37349 2023-05-27 12:01:25,815 INFO [Listener at localhost/39827] master.HMaster(444): hbase.rootdir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b, hbase.cluster.distributed=false 2023-05-27 12:01:25,831 INFO [Listener at localhost/39827] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 12:01:25,831 INFO [Listener at localhost/39827] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 12:01:25,832 INFO [Listener at 
localhost/39827] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 12:01:25,832 INFO [Listener at localhost/39827] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 12:01:25,832 INFO [Listener at localhost/39827] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 12:01:25,832 INFO [Listener at localhost/39827] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 12:01:25,832 INFO [Listener at localhost/39827] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 12:01:25,833 INFO [Listener at localhost/39827] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36915 2023-05-27 12:01:25,834 INFO [Listener at localhost/39827] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-27 12:01:25,835 DEBUG [Listener at localhost/39827] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-27 12:01:25,835 INFO [Listener at localhost/39827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 12:01:25,837 INFO [Listener at localhost/39827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 12:01:25,838 INFO [Listener at localhost/39827] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36915 connecting to ZooKeeper ensemble=127.0.0.1:55033 2023-05-27 12:01:25,840 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): regionserver:369150x0, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 12:01:25,842 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36915-0x1006c8440100001 connected 2023-05-27 12:01:25,842 DEBUG [Listener at localhost/39827] zookeeper.ZKUtil(164): regionserver:36915-0x1006c8440100001, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 12:01:25,842 DEBUG [Listener at localhost/39827] zookeeper.ZKUtil(164): regionserver:36915-0x1006c8440100001, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 12:01:25,843 DEBUG [Listener at localhost/39827] zookeeper.ZKUtil(164): regionserver:36915-0x1006c8440100001, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 12:01:25,843 DEBUG [Listener at localhost/39827] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36915 2023-05-27 12:01:25,845 DEBUG [Listener at localhost/39827] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36915 2023-05-27 12:01:25,846 DEBUG [Listener at localhost/39827] ipc.RpcExecutor(311): 
Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36915 2023-05-27 12:01:25,846 DEBUG [Listener at localhost/39827] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36915 2023-05-27 12:01:25,847 DEBUG [Listener at localhost/39827] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36915 2023-05-27 12:01:25,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37349,1685188885789 2023-05-27 12:01:25,849 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 12:01:25,850 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37349,1685188885789 2023-05-27 12:01:25,851 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 12:01:25,851 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): regionserver:36915-0x1006c8440100001, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 12:01:25,851 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 12:01:25,851 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 12:01:25,852 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37349,1685188885789 from backup master directory 2023-05-27 12:01:25,852 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 12:01:25,854 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37349,1685188885789 2023-05-27 12:01:25,854 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 12:01:25,854 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
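The entries above record the mini ZooKeeper ensemble answering on client port 55033, the HBase root directory and version file being laid down on the mini DFS, and the master and region server RPC executors and ZooKeeper watchers being wired up. As a point of reference, the sketch below shows the usual way a test drives this kind of environment through HBaseTestingUtility; the option values (one master, one region server, two datanodes) are illustrative assumptions that match what the log suggests, not a transcript of this test's exact configuration.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    // HBaseTestingUtility owns the temporary rootdir, the embedded
    // MiniZooKeeperCluster and the mini DFS/HBase processes seen in the log.
    HBaseTestingUtility util = new HBaseTestingUtility();

    // Illustrative topology; adjust counts as needed for a given test.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .build();

    util.startMiniCluster(option);   // starts ZK, DFS and HBase
    try {
      // ... run test logic against util.getConnection() ...
    } finally {
      util.shutdownMiniCluster();    // tears everything down
    }
  }
}
```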
2023-05-27 12:01:25,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37349,1685188885789 2023-05-27 12:01:25,865 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/hbase.id with ID: 5b293821-cb99-4a9f-8ce3-d5be30c58918 2023-05-27 12:01:25,874 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 12:01:25,876 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 12:01:25,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2e52a257 to 127.0.0.1:55033 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 12:01:25,887 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@385b99f0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 12:01:25,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 12:01:25,888 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-27 12:01:25,888 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 12:01:25,889 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/data/master/store-tmp 2023-05-27 12:01:25,894 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 12:01:25,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 12:01:25,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 12:01:25,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 12:01:25,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 12:01:25,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 12:01:25,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 12:01:25,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 12:01:25,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/WALs/jenkins-hbase4.apache.org,37349,1685188885789 2023-05-27 12:01:25,897 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37349%2C1685188885789, suffix=, logDir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/WALs/jenkins-hbase4.apache.org,37349,1685188885789, archiveDir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/oldWALs, maxLogs=10 2023-05-27 12:01:25,901 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/WALs/jenkins-hbase4.apache.org,37349,1685188885789/jenkins-hbase4.apache.org%2C37349%2C1685188885789.1685188885897 2023-05-27 12:01:25,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39053,DS-0d054c27-6464-4a3e-95c7-2e2b5a9a5422,DISK], DatanodeInfoWithStorage[127.0.0.1:36039,DS-1161064d-a36f-4e60-b4e0-7370549c78f2,DISK]] 2023-05-27 12:01:25,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-27 12:01:25,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 12:01:25,902 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 12:01:25,902 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 12:01:25,903 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-27 12:01:25,904 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-27 12:01:25,905 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-27 12:01:25,905 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 12:01:25,906 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 12:01:25,906 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 12:01:25,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 12:01:25,909 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 12:01:25,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=862338, jitterRate=0.09651961922645569}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 12:01:25,910 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 12:01:25,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-27 12:01:25,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-27 12:01:25,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
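The master-store WAL above is created through FSHLogProvider with blocksize=256 MB, rollsize=128 MB and maxLogs=10. A minimal sketch of the configuration keys that shape those numbers is shown below; the key names are the stock hbase-default ones and are stated here as assumptions, with values chosen to mirror what the log reports (roll size is the block size times the roll multiplier).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // "filesystem" selects the classic FSHLogProvider named in the log;
    // "asyncfs" would select the async WAL provider instead.
    conf.set("hbase.wal.provider", "filesystem");

    // WAL block size and roll multiplier; 256 MB * 0.5 gives the 128 MB
    // roll size reported for the master-store WAL above.
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);

    // Upper bound on un-archived WAL files before flushes/rolls are forced.
    conf.setInt("hbase.regionserver.maxlogs", 10);
  }
}
```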
2023-05-27 12:01:25,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-27 12:01:25,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-27 12:01:25,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-27 12:01:25,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-27 12:01:25,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-27 12:01:25,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-27 12:01:25,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-27 12:01:25,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
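The StochasticLoadBalancer line above reports maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800 and maxRunningTime=30000. The sketch below sets those same values through configuration; the key names are assumed to be the standard hbase.master.balancer.stochastic.* properties and are included only as a rough illustration of where those numbers come from.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Values mirror what the balancer logged above; the key names are
    // assumptions based on the stock hbase-default configuration.
    conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1000000);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30000L);
  }
}
```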
2023-05-27 12:01:25,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-27 12:01:25,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-27 12:01:25,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-27 12:01:25,926 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 12:01:25,926 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-27 12:01:25,927 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-27 12:01:25,927 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-27 12:01:25,930 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 12:01:25,930 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): regionserver:36915-0x1006c8440100001, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 12:01:25,930 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 12:01:25,930 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37349,1685188885789, sessionid=0x1006c8440100000, setting cluster-up flag (Was=false) 2023-05-27 12:01:25,935 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 12:01:25,943 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-27 12:01:25,944 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37349,1685188885789 2023-05-27 12:01:25,947 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 
12:01:25,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-27 12:01:25,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37349,1685188885789 2023-05-27 12:01:25,953 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/.hbase-snapshot/.tmp 2023-05-27 12:01:25,955 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-27 12:01:25,955 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 12:01:25,956 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 12:01:25,956 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 12:01:25,956 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 12:01:25,956 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-27 12:01:25,956 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 12:01:25,956 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 12:01:25,956 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 12:01:25,957 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685188915957 2023-05-27 12:01:25,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-27 12:01:25,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-27 12:01:25,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-27 12:01:25,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-27 12:01:25,958 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-27 12:01:25,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-27 12:01:25,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 12:01:25,958 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 12:01:25,958 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-27 12:01:25,959 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-27 12:01:25,959 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-27 12:01:25,959 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-27 12:01:25,959 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-27 12:01:25,959 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-27 12:01:25,960 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 12:01:25,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188885959,5,FailOnTimeoutGroup] 2023-05-27 12:01:25,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188885966,5,FailOnTimeoutGroup] 2023-05-27 12:01:25,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
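The hbase:meta descriptor printed above lists its column families with attributes such as BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3' and BLOCKSIZE => '8192'. For orientation, the sketch below builds an ordinary table descriptor with an 'info' family carrying the same attributes using the public 2.x builder API; the table name is hypothetical and the example is not how hbase:meta itself is created (that is done internally by FSTableDescriptors, as the log shows).

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorSketch {
  public static void main(String[] args) {
    // 'info' family mirroring the attributes printed in the log above.
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example"))   // hypothetical table name
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE)
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .build())
        .build();

    // toString() prints the descriptor in a {...} form similar to the log.
    System.out.println(desc);
  }
}
```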
2023-05-27 12:01:25,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-27 12:01:25,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-27 12:01:25,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-27 12:01:25,972 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 12:01:25,972 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 12:01:25,973 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b 2023-05-27 12:01:25,978 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 12:01:25,979 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 12:01:25,980 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/info 2023-05-27 12:01:25,981 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 12:01:25,981 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 12:01:25,981 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 12:01:25,982 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/rep_barrier 2023-05-27 12:01:25,982 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 12:01:25,983 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 12:01:25,983 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 12:01:25,984 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/table 2023-05-27 12:01:25,984 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 12:01:25,985 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 12:01:25,985 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740 2023-05-27 12:01:25,986 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740 2023-05-27 12:01:25,987 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 12:01:25,988 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 12:01:25,990 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 12:01:25,990 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=745313, jitterRate=-0.05228564143180847}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 12:01:25,990 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 12:01:25,990 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 12:01:25,990 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 12:01:25,990 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 12:01:25,990 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 12:01:25,990 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 12:01:25,991 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 12:01:25,991 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 12:01:25,992 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 12:01:25,992 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-27 12:01:25,992 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-27 12:01:25,993 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-27 12:01:25,994 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 
2023-05-27 12:01:26,048 INFO [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(951): ClusterId : 5b293821-cb99-4a9f-8ce3-d5be30c58918 2023-05-27 12:01:26,049 DEBUG [RS:0;jenkins-hbase4:36915] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-27 12:01:26,051 DEBUG [RS:0;jenkins-hbase4:36915] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-27 12:01:26,051 DEBUG [RS:0;jenkins-hbase4:36915] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-27 12:01:26,054 DEBUG [RS:0;jenkins-hbase4:36915] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-27 12:01:26,055 DEBUG [RS:0;jenkins-hbase4:36915] zookeeper.ReadOnlyZKClient(139): Connect 0x16ccd573 to 127.0.0.1:55033 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 12:01:26,058 DEBUG [RS:0;jenkins-hbase4:36915] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@180b39da, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 12:01:26,059 DEBUG [RS:0;jenkins-hbase4:36915] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c34be03, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 12:01:26,067 DEBUG [RS:0;jenkins-hbase4:36915] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:36915 2023-05-27 12:01:26,067 INFO [RS:0;jenkins-hbase4:36915] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-27 12:01:26,067 INFO [RS:0;jenkins-hbase4:36915] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-27 12:01:26,067 DEBUG [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(1022): About to register with Master. 
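Above, the region server's ReadOnlyZKClient and RPC client connect to the ZooKeeper ensemble at 127.0.0.1:55033 that the mini cluster registered earlier. A test client reaches the same cluster by pointing its configuration at that ensemble; the sketch below shows the standard ConnectionFactory path, with the quorum host and port hard-coded here purely for illustration (in a real test they would come from the test utility's Configuration).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ClientConnectionSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    // Ensemble location taken from the log above; normally obtained from
    // the running HBaseTestingUtility rather than hard-coded.
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.setInt("hbase.zookeeper.property.clientPort", 55033);

    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      // ... obtain Table or Admin instances from the connection ...
    }
  }
}
```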
2023-05-27 12:01:26,068 INFO [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,37349,1685188885789 with isa=jenkins-hbase4.apache.org/172.31.14.131:36915, startcode=1685188885831 2023-05-27 12:01:26,068 DEBUG [RS:0;jenkins-hbase4:36915] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-27 12:01:26,071 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56227, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 12:01:26,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37349] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36915,1685188885831 2023-05-27 12:01:26,072 DEBUG [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b 2023-05-27 12:01:26,072 DEBUG [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36327 2023-05-27 12:01:26,072 DEBUG [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 12:01:26,074 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 12:01:26,074 DEBUG [RS:0;jenkins-hbase4:36915] zookeeper.ZKUtil(162): regionserver:36915-0x1006c8440100001, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36915,1685188885831 2023-05-27 12:01:26,074 WARN [RS:0;jenkins-hbase4:36915] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-27 12:01:26,074 INFO [RS:0;jenkins-hbase4:36915] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 12:01:26,074 DEBUG [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(1946): logDir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/jenkins-hbase4.apache.org,36915,1685188885831 2023-05-27 12:01:26,075 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36915,1685188885831] 2023-05-27 12:01:26,078 DEBUG [RS:0;jenkins-hbase4:36915] zookeeper.ZKUtil(162): regionserver:36915-0x1006c8440100001, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36915,1685188885831 2023-05-27 12:01:26,079 DEBUG [RS:0;jenkins-hbase4:36915] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 12:01:26,079 INFO [RS:0;jenkins-hbase4:36915] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 12:01:26,080 INFO [RS:0;jenkins-hbase4:36915] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 12:01:26,080 INFO [RS:0;jenkins-hbase4:36915] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 12:01:26,080 INFO [RS:0;jenkins-hbase4:36915] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 12:01:26,081 INFO [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 12:01:26,082 INFO [RS:0;jenkins-hbase4:36915] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-27 12:01:26,082 DEBUG [RS:0;jenkins-hbase4:36915] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 12:01:26,082 DEBUG [RS:0;jenkins-hbase4:36915] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 12:01:26,082 DEBUG [RS:0;jenkins-hbase4:36915] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 12:01:26,082 DEBUG [RS:0;jenkins-hbase4:36915] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 12:01:26,082 DEBUG [RS:0;jenkins-hbase4:36915] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 12:01:26,082 DEBUG [RS:0;jenkins-hbase4:36915] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 12:01:26,082 DEBUG [RS:0;jenkins-hbase4:36915] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 12:01:26,082 DEBUG [RS:0;jenkins-hbase4:36915] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 12:01:26,082 DEBUG [RS:0;jenkins-hbase4:36915] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 12:01:26,082 DEBUG [RS:0;jenkins-hbase4:36915] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 12:01:26,083 INFO [RS:0;jenkins-hbase4:36915] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 12:01:26,083 INFO [RS:0;jenkins-hbase4:36915] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 12:01:26,084 INFO [RS:0;jenkins-hbase4:36915] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 12:01:26,094 INFO [RS:0;jenkins-hbase4:36915] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 12:01:26,094 INFO [RS:0;jenkins-hbase4:36915] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36915,1685188885831-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
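The region server above registers several periodic chores (CompactionChecker and MemstoreFlusherChore every 1000 ms, nonceCleaner every 360000 ms, and so on) on its ChoreService. The sketch below shows the general ScheduledChore/ChoreService pattern those entries reflect, assuming the protected ScheduledChore(name, stopper, period) constructor; the chore name and period are made up for illustration.

```java
import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) {
    // Minimal Stoppable so the chore can be scheduled outside a server.
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped = false;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };

    // Runs every 1000 ms, like the per-second chores logged above.
    ScheduledChore chore = new ScheduledChore("exampleChore", stopper, 1000) {
      @Override protected void chore() {
        // periodic work goes here
      }
    };

    ChoreService choreService = new ChoreService("example");
    choreService.scheduleChore(chore);

    // ... when the test is done ...
    choreService.shutdown();
  }
}
```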
2023-05-27 12:01:26,104 INFO [RS:0;jenkins-hbase4:36915] regionserver.Replication(203): jenkins-hbase4.apache.org,36915,1685188885831 started 2023-05-27 12:01:26,104 INFO [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36915,1685188885831, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36915, sessionid=0x1006c8440100001 2023-05-27 12:01:26,104 DEBUG [RS:0;jenkins-hbase4:36915] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 12:01:26,104 DEBUG [RS:0;jenkins-hbase4:36915] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36915,1685188885831 2023-05-27 12:01:26,104 DEBUG [RS:0;jenkins-hbase4:36915] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36915,1685188885831' 2023-05-27 12:01:26,104 DEBUG [RS:0;jenkins-hbase4:36915] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 12:01:26,105 DEBUG [RS:0;jenkins-hbase4:36915] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 12:01:26,105 DEBUG [RS:0;jenkins-hbase4:36915] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 12:01:26,105 DEBUG [RS:0;jenkins-hbase4:36915] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 12:01:26,105 DEBUG [RS:0;jenkins-hbase4:36915] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36915,1685188885831 2023-05-27 12:01:26,105 DEBUG [RS:0;jenkins-hbase4:36915] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36915,1685188885831' 2023-05-27 12:01:26,105 DEBUG [RS:0;jenkins-hbase4:36915] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 12:01:26,105 DEBUG [RS:0;jenkins-hbase4:36915] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 12:01:26,106 DEBUG [RS:0;jenkins-hbase4:36915] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 12:01:26,106 INFO [RS:0;jenkins-hbase4:36915] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 12:01:26,106 INFO [RS:0;jenkins-hbase4:36915] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-27 12:01:26,144 DEBUG [jenkins-hbase4:37349] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-27 12:01:26,145 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36915,1685188885831, state=OPENING 2023-05-27 12:01:26,146 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-27 12:01:26,148 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 12:01:26,148 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 12:01:26,149 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36915,1685188885831}] 2023-05-27 12:01:26,207 INFO [RS:0;jenkins-hbase4:36915] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36915%2C1685188885831, suffix=, logDir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/jenkins-hbase4.apache.org,36915,1685188885831, archiveDir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/oldWALs, maxLogs=32 2023-05-27 12:01:26,215 INFO [RS:0;jenkins-hbase4:36915] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/jenkins-hbase4.apache.org,36915,1685188885831/jenkins-hbase4.apache.org%2C36915%2C1685188885831.1685188886208 2023-05-27 12:01:26,215 DEBUG [RS:0;jenkins-hbase4:36915] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36039,DS-1161064d-a36f-4e60-b4e0-7370549c78f2,DISK], DatanodeInfoWithStorage[127.0.0.1:39053,DS-0d054c27-6464-4a3e-95c7-2e2b5a9a5422,DISK]] 2023-05-27 12:01:26,302 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36915,1685188885831 2023-05-27 12:01:26,302 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 12:01:26,305 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33178, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 12:01:26,308 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-27 12:01:26,308 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 12:01:26,309 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36915%2C1685188885831.meta, suffix=.meta, logDir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/jenkins-hbase4.apache.org,36915,1685188885831, archiveDir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/oldWALs, maxLogs=32 2023-05-27 12:01:26,315 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/jenkins-hbase4.apache.org,36915,1685188885831/jenkins-hbase4.apache.org%2C36915%2C1685188885831.meta.1685188886310.meta 2023-05-27 12:01:26,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39053,DS-0d054c27-6464-4a3e-95c7-2e2b5a9a5422,DISK], DatanodeInfoWithStorage[127.0.0.1:36039,DS-1161064d-a36f-4e60-b4e0-7370549c78f2,DISK]] 2023-05-27 12:01:26,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-27 12:01:26,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-27 12:01:26,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-27 12:01:26,315 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-27 12:01:26,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-27 12:01:26,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 12:01:26,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-27 12:01:26,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-27 12:01:26,316 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 12:01:26,317 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/info 2023-05-27 12:01:26,317 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/info 2023-05-27 12:01:26,318 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 12:01:26,318 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 12:01:26,318 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 12:01:26,319 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/rep_barrier 2023-05-27 12:01:26,319 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/rep_barrier 2023-05-27 12:01:26,319 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 12:01:26,320 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 12:01:26,320 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 12:01:26,320 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/table 2023-05-27 12:01:26,320 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/table 2023-05-27 12:01:26,320 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 12:01:26,321 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 12:01:26,321 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740 2023-05-27 12:01:26,322 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740 2023-05-27 12:01:26,324 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 12:01:26,325 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 12:01:26,326 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=861751, jitterRate=0.09577381610870361}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 12:01:26,326 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 12:01:26,328 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685188886302 2023-05-27 12:01:26,332 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-27 12:01:26,332 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-27 12:01:26,332 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36915,1685188885831, state=OPEN 2023-05-27 12:01:26,334 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-27 12:01:26,334 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 12:01:26,336 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-27 12:01:26,336 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36915,1685188885831 in 186 msec 2023-05-27 12:01:26,337 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-27 12:01:26,337 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 344 msec 2023-05-27 12:01:26,339 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 384 msec 2023-05-27 12:01:26,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685188886339, completionTime=-1 2023-05-27 12:01:26,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-27 12:01:26,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-27 12:01:26,341 DEBUG [hconnection-0x74733162-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 12:01:26,343 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33184, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 12:01:26,344 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-27 12:01:26,344 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685188946344 2023-05-27 12:01:26,344 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685189006344 2023-05-27 12:01:26,344 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-27 12:01:26,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37349,1685188885789-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 12:01:26,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37349,1685188885789-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 12:01:26,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37349,1685188885789-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 12:01:26,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37349, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 12:01:26,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-27 12:01:26,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-27 12:01:26,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 12:01:26,350 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-27 12:01:26,350 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-27 12:01:26,351 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 12:01:26,352 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 12:01:26,353 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/.tmp/data/hbase/namespace/b9e83af31985db6c1546bd630d639af6 2023-05-27 12:01:26,354 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/.tmp/data/hbase/namespace/b9e83af31985db6c1546bd630d639af6 empty. 2023-05-27 12:01:26,354 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/.tmp/data/hbase/namespace/b9e83af31985db6c1546bd630d639af6 2023-05-27 12:01:26,354 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-27 12:01:26,363 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-27 12:01:26,364 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => b9e83af31985db6c1546bd630d639af6, NAME => 'hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/.tmp 2023-05-27 12:01:26,370 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 12:01:26,370 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing b9e83af31985db6c1546bd630d639af6, disabling compactions & flushes 2023-05-27 12:01:26,370 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. 
2023-05-27 12:01:26,370 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. 2023-05-27 12:01:26,370 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. after waiting 0 ms 2023-05-27 12:01:26,370 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. 2023-05-27 12:01:26,370 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. 2023-05-27 12:01:26,371 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for b9e83af31985db6c1546bd630d639af6: 2023-05-27 12:01:26,373 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 12:01:26,373 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188886373"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685188886373"}]},"ts":"1685188886373"} 2023-05-27 12:01:26,376 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 12:01:26,376 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 12:01:26,377 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188886376"}]},"ts":"1685188886376"} 2023-05-27 12:01:26,378 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-27 12:01:26,390 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b9e83af31985db6c1546bd630d639af6, ASSIGN}] 2023-05-27 12:01:26,391 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b9e83af31985db6c1546bd630d639af6, ASSIGN 2023-05-27 12:01:26,392 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b9e83af31985db6c1546bd630d639af6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36915,1685188885831; forceNewPlan=false, retain=false 2023-05-27 12:01:26,543 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b9e83af31985db6c1546bd630d639af6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36915,1685188885831 2023-05-27 12:01:26,543 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188886543"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685188886543"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685188886543"}]},"ts":"1685188886543"} 2023-05-27 12:01:26,545 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure b9e83af31985db6c1546bd630d639af6, server=jenkins-hbase4.apache.org,36915,1685188885831}] 2023-05-27 12:01:26,700 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. 2023-05-27 12:01:26,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b9e83af31985db6c1546bd630d639af6, NAME => 'hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6.', STARTKEY => '', ENDKEY => ''} 2023-05-27 12:01:26,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b9e83af31985db6c1546bd630d639af6 2023-05-27 12:01:26,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 12:01:26,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b9e83af31985db6c1546bd630d639af6 2023-05-27 12:01:26,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b9e83af31985db6c1546bd630d639af6 2023-05-27 12:01:26,701 INFO [StoreOpener-b9e83af31985db6c1546bd630d639af6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b9e83af31985db6c1546bd630d639af6 2023-05-27 12:01:26,703 DEBUG [StoreOpener-b9e83af31985db6c1546bd630d639af6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/namespace/b9e83af31985db6c1546bd630d639af6/info 2023-05-27 12:01:26,703 DEBUG [StoreOpener-b9e83af31985db6c1546bd630d639af6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/namespace/b9e83af31985db6c1546bd630d639af6/info 2023-05-27 12:01:26,703 INFO [StoreOpener-b9e83af31985db6c1546bd630d639af6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b9e83af31985db6c1546bd630d639af6 columnFamilyName info 2023-05-27 12:01:26,704 INFO [StoreOpener-b9e83af31985db6c1546bd630d639af6-1] regionserver.HStore(310): Store=b9e83af31985db6c1546bd630d639af6/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 12:01:26,704 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/namespace/b9e83af31985db6c1546bd630d639af6 2023-05-27 12:01:26,705 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/namespace/b9e83af31985db6c1546bd630d639af6 2023-05-27 12:01:26,707 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b9e83af31985db6c1546bd630d639af6 2023-05-27 12:01:26,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/namespace/b9e83af31985db6c1546bd630d639af6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 12:01:26,709 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b9e83af31985db6c1546bd630d639af6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=749140, jitterRate=-0.04741951823234558}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 12:01:26,710 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b9e83af31985db6c1546bd630d639af6: 2023-05-27 12:01:26,711 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6., pid=6, masterSystemTime=1685188886697 2023-05-27 12:01:26,713 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. 2023-05-27 12:01:26,713 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. 
2023-05-27 12:01:26,714 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b9e83af31985db6c1546bd630d639af6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36915,1685188885831 2023-05-27 12:01:26,714 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685188886713"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685188886713"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685188886713"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685188886713"}]},"ts":"1685188886713"} 2023-05-27 12:01:26,717 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-27 12:01:26,717 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure b9e83af31985db6c1546bd630d639af6, server=jenkins-hbase4.apache.org,36915,1685188885831 in 171 msec 2023-05-27 12:01:26,719 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-27 12:01:26,719 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b9e83af31985db6c1546bd630d639af6, ASSIGN in 329 msec 2023-05-27 12:01:26,719 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 12:01:26,720 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685188886719"}]},"ts":"1685188886719"} 2023-05-27 12:01:26,721 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-27 12:01:26,723 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 12:01:26,724 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 374 msec 2023-05-27 12:01:26,751 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-27 12:01:26,752 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-27 12:01:26,752 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 12:01:26,756 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-27 12:01:26,763 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): 
master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 12:01:26,766 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-05-27 12:01:26,777 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-27 12:01:26,783 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 12:01:26,787 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-05-27 12:01:26,792 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-27 12:01:26,794 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-27 12:01:26,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.940sec 2023-05-27 12:01:26,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-27 12:01:26,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-27 12:01:26,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-27 12:01:26,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37349,1685188885789-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-27 12:01:26,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37349,1685188885789-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-27 12:01:26,796 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-27 12:01:26,849 DEBUG [Listener at localhost/39827] zookeeper.ReadOnlyZKClient(139): Connect 0x0f9db487 to 127.0.0.1:55033 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 12:01:26,854 DEBUG [Listener at localhost/39827] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@aab1897, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 12:01:26,855 DEBUG [hconnection-0x73254cbc-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 12:01:26,859 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33196, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 12:01:26,860 INFO [Listener at localhost/39827] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,37349,1685188885789 2023-05-27 12:01:26,860 INFO [Listener at localhost/39827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 12:01:26,863 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-27 12:01:26,863 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 12:01:26,864 INFO [Listener at localhost/39827] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-27 12:01:26,864 INFO [Listener at localhost/39827] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 12:01:26,866 INFO [Listener at localhost/39827] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/test.com,8080,1, archiveDir=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/oldWALs, maxLogs=32 2023-05-27 12:01:26,870 INFO [Listener at localhost/39827] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/test.com,8080,1/test.com%2C8080%2C1.1685188886866 2023-05-27 12:01:26,870 DEBUG [Listener at localhost/39827] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36039,DS-1161064d-a36f-4e60-b4e0-7370549c78f2,DISK], DatanodeInfoWithStorage[127.0.0.1:39053,DS-0d054c27-6464-4a3e-95c7-2e2b5a9a5422,DISK]] 2023-05-27 12:01:26,876 INFO [Listener at localhost/39827] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/test.com,8080,1/test.com%2C8080%2C1.1685188886866 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/test.com,8080,1/test.com%2C8080%2C1.1685188886870 
2023-05-27 12:01:26,877 DEBUG [Listener at localhost/39827] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36039,DS-1161064d-a36f-4e60-b4e0-7370549c78f2,DISK], DatanodeInfoWithStorage[127.0.0.1:39053,DS-0d054c27-6464-4a3e-95c7-2e2b5a9a5422,DISK]] 2023-05-27 12:01:26,877 DEBUG [Listener at localhost/39827] wal.AbstractFSWAL(716): hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/test.com,8080,1/test.com%2C8080%2C1.1685188886866 is not closed yet, will try archiving it next time 2023-05-27 12:01:26,877 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/test.com,8080,1 2023-05-27 12:01:26,884 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/test.com,8080,1/test.com%2C8080%2C1.1685188886866 to hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/oldWALs/test.com%2C8080%2C1.1685188886866 2023-05-27 12:01:26,887 DEBUG [Listener at localhost/39827] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/oldWALs 2023-05-27 12:01:26,887 INFO [Listener at localhost/39827] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1685188886870) 2023-05-27 12:01:26,887 INFO [Listener at localhost/39827] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-27 12:01:26,887 DEBUG [Listener at localhost/39827] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0f9db487 to 127.0.0.1:55033 2023-05-27 12:01:26,887 DEBUG [Listener at localhost/39827] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 12:01:26,888 DEBUG [Listener at localhost/39827] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-27 12:01:26,888 DEBUG [Listener at localhost/39827] util.JVMClusterUtil(257): Found active master hash=2088531870, stopped=false 2023-05-27 12:01:26,888 INFO [Listener at localhost/39827] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37349,1685188885789 2023-05-27 12:01:26,891 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 12:01:26,891 INFO [Listener at localhost/39827] procedure2.ProcedureExecutor(629): Stopping 2023-05-27 12:01:26,891 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): regionserver:36915-0x1006c8440100001, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 12:01:26,891 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 12:01:26,891 DEBUG [Listener at localhost/39827] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2e52a257 to 127.0.0.1:55033 2023-05-27 12:01:26,892 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 12:01:26,892 DEBUG [Listener at localhost/39827] ipc.AbstractRpcClient(494): Stopping rpc client 
2023-05-27 12:01:26,892 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36915-0x1006c8440100001, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 12:01:26,892 INFO [Listener at localhost/39827] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,36915,1685188885831' ***** 2023-05-27 12:01:26,892 INFO [Listener at localhost/39827] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 12:01:26,892 INFO [RS:0;jenkins-hbase4:36915] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 12:01:26,892 INFO [RS:0;jenkins-hbase4:36915] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-27 12:01:26,892 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 12:01:26,892 INFO [RS:0;jenkins-hbase4:36915] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-27 12:01:26,893 INFO [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(3303): Received CLOSE for b9e83af31985db6c1546bd630d639af6 2023-05-27 12:01:26,893 INFO [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36915,1685188885831 2023-05-27 12:01:26,893 DEBUG [RS:0;jenkins-hbase4:36915] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x16ccd573 to 127.0.0.1:55033 2023-05-27 12:01:26,893 DEBUG [RS:0;jenkins-hbase4:36915] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 12:01:26,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b9e83af31985db6c1546bd630d639af6, disabling compactions & flushes 2023-05-27 12:01:26,893 INFO [RS:0;jenkins-hbase4:36915] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-27 12:01:26,893 INFO [RS:0;jenkins-hbase4:36915] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-27 12:01:26,893 INFO [RS:0;jenkins-hbase4:36915] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-27 12:01:26,893 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. 2023-05-27 12:01:26,894 INFO [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 12:01:26,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. 2023-05-27 12:01:26,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. after waiting 0 ms 2023-05-27 12:01:26,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. 
2023-05-27 12:01:26,894 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b9e83af31985db6c1546bd630d639af6 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-27 12:01:26,894 INFO [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-27 12:01:26,894 DEBUG [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, b9e83af31985db6c1546bd630d639af6=hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6.} 2023-05-27 12:01:26,894 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 12:01:26,894 DEBUG [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(1504): Waiting on 1588230740, b9e83af31985db6c1546bd630d639af6 2023-05-27 12:01:26,894 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 12:01:26,894 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 12:01:26,894 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 12:01:26,894 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 12:01:26,894 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-05-27 12:01:26,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/namespace/b9e83af31985db6c1546bd630d639af6/.tmp/info/8e75186e2c544fe7bac4da13b0145e58 2023-05-27 12:01:26,907 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/.tmp/info/77737bd596454ac9ba52a23a7ee8e9aa 2023-05-27 12:01:26,914 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/namespace/b9e83af31985db6c1546bd630d639af6/.tmp/info/8e75186e2c544fe7bac4da13b0145e58 as hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/namespace/b9e83af31985db6c1546bd630d639af6/info/8e75186e2c544fe7bac4da13b0145e58 2023-05-27 12:01:26,919 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/namespace/b9e83af31985db6c1546bd630d639af6/info/8e75186e2c544fe7bac4da13b0145e58, entries=2, sequenceid=6, filesize=4.8 K 2023-05-27 12:01:26,920 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), 
to=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/.tmp/table/dd454e5f750a4825aefa6108a08881f8 2023-05-27 12:01:26,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for b9e83af31985db6c1546bd630d639af6 in 26ms, sequenceid=6, compaction requested=false 2023-05-27 12:01:26,926 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/namespace/b9e83af31985db6c1546bd630d639af6/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-27 12:01:26,926 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. 2023-05-27 12:01:26,926 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b9e83af31985db6c1546bd630d639af6: 2023-05-27 12:01:26,926 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/.tmp/info/77737bd596454ac9ba52a23a7ee8e9aa as hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/info/77737bd596454ac9ba52a23a7ee8e9aa 2023-05-27 12:01:26,926 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685188886349.b9e83af31985db6c1546bd630d639af6. 2023-05-27 12:01:26,931 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/info/77737bd596454ac9ba52a23a7ee8e9aa, entries=10, sequenceid=9, filesize=5.9 K 2023-05-27 12:01:26,931 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/.tmp/table/dd454e5f750a4825aefa6108a08881f8 as hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/table/dd454e5f750a4825aefa6108a08881f8 2023-05-27 12:01:26,936 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/table/dd454e5f750a4825aefa6108a08881f8, entries=2, sequenceid=9, filesize=4.7 K 2023-05-27 12:01:26,936 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1290, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 42ms, sequenceid=9, compaction requested=false 2023-05-27 12:01:26,942 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-05-27 12:01:26,943 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-27 12:01:26,943 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed 
hbase:meta,,1.1588230740 2023-05-27 12:01:26,943 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 12:01:26,943 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-27 12:01:27,094 INFO [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36915,1685188885831; all regions closed. 2023-05-27 12:01:27,095 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/jenkins-hbase4.apache.org,36915,1685188885831 2023-05-27 12:01:27,100 DEBUG [RS:0;jenkins-hbase4:36915] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/oldWALs 2023-05-27 12:01:27,100 INFO [RS:0;jenkins-hbase4:36915] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C36915%2C1685188885831.meta:.meta(num 1685188886310) 2023-05-27 12:01:27,100 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/WALs/jenkins-hbase4.apache.org,36915,1685188885831 2023-05-27 12:01:27,104 DEBUG [RS:0;jenkins-hbase4:36915] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/oldWALs 2023-05-27 12:01:27,104 INFO [RS:0;jenkins-hbase4:36915] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C36915%2C1685188885831:(num 1685188886208) 2023-05-27 12:01:27,104 DEBUG [RS:0;jenkins-hbase4:36915] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 12:01:27,104 INFO [RS:0;jenkins-hbase4:36915] regionserver.LeaseManager(133): Closed leases 2023-05-27 12:01:27,104 INFO [RS:0;jenkins-hbase4:36915] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-27 12:01:27,104 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-27 12:01:27,105 INFO [RS:0;jenkins-hbase4:36915] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36915 2023-05-27 12:01:27,108 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): regionserver:36915-0x1006c8440100001, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36915,1685188885831 2023-05-27 12:01:27,108 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 12:01:27,108 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): regionserver:36915-0x1006c8440100001, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 12:01:27,109 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36915,1685188885831] 2023-05-27 12:01:27,109 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36915,1685188885831; numProcessing=1 2023-05-27 12:01:27,110 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36915,1685188885831 already deleted, retry=false 2023-05-27 12:01:27,110 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36915,1685188885831 expired; onlineServers=0 2023-05-27 12:01:27,110 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,37349,1685188885789' ***** 2023-05-27 12:01:27,110 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-27 12:01:27,111 DEBUG [M:0;jenkins-hbase4:37349] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7843ecc9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 12:01:27,111 INFO [M:0;jenkins-hbase4:37349] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37349,1685188885789 2023-05-27 12:01:27,111 INFO [M:0;jenkins-hbase4:37349] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37349,1685188885789; all regions closed. 2023-05-27 12:01:27,111 DEBUG [M:0;jenkins-hbase4:37349] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 12:01:27,111 DEBUG [M:0;jenkins-hbase4:37349] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-27 12:01:27,111 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-27 12:01:27,111 DEBUG [M:0;jenkins-hbase4:37349] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-27 12:01:27,111 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188885966] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685188885966,5,FailOnTimeoutGroup] 2023-05-27 12:01:27,112 INFO [M:0;jenkins-hbase4:37349] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-05-27 12:01:27,111 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188885959] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685188885959,5,FailOnTimeoutGroup] 2023-05-27 12:01:27,112 INFO [M:0;jenkins-hbase4:37349] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-27 12:01:27,113 INFO [M:0;jenkins-hbase4:37349] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-27 12:01:27,113 DEBUG [M:0;jenkins-hbase4:37349] master.HMaster(1512): Stopping service threads 2023-05-27 12:01:27,113 INFO [M:0;jenkins-hbase4:37349] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-27 12:01:27,113 ERROR [M:0;jenkins-hbase4:37349] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup] 2023-05-27 12:01:27,113 INFO [M:0;jenkins-hbase4:37349] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-27 12:01:27,113 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-27 12:01:27,113 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-27 12:01:27,113 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 12:01:27,114 DEBUG [M:0;jenkins-hbase4:37349] zookeeper.ZKUtil(398): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-27 12:01:27,114 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 12:01:27,114 WARN [M:0;jenkins-hbase4:37349] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-27 12:01:27,114 INFO [M:0;jenkins-hbase4:37349] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-27 12:01:27,114 INFO [M:0;jenkins-hbase4:37349] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-27 12:01:27,115 DEBUG [M:0;jenkins-hbase4:37349] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 12:01:27,115 INFO [M:0;jenkins-hbase4:37349] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 12:01:27,115 DEBUG [M:0;jenkins-hbase4:37349] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 12:01:27,115 DEBUG [M:0;jenkins-hbase4:37349] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
after waiting 0 ms 2023-05-27 12:01:27,115 DEBUG [M:0;jenkins-hbase4:37349] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 12:01:27,115 INFO [M:0;jenkins-hbase4:37349] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.07 KB heapSize=29.55 KB 2023-05-27 12:01:27,124 INFO [M:0;jenkins-hbase4:37349] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.07 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f4f7fb004c984c74a62fd5ee10d8c57f 2023-05-27 12:01:27,128 DEBUG [M:0;jenkins-hbase4:37349] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f4f7fb004c984c74a62fd5ee10d8c57f as hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f4f7fb004c984c74a62fd5ee10d8c57f 2023-05-27 12:01:27,132 INFO [M:0;jenkins-hbase4:37349] regionserver.HStore(1080): Added hdfs://localhost:36327/user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f4f7fb004c984c74a62fd5ee10d8c57f, entries=8, sequenceid=66, filesize=6.3 K 2023-05-27 12:01:27,133 INFO [M:0;jenkins-hbase4:37349] regionserver.HRegion(2948): Finished flush of dataSize ~24.07 KB/24646, heapSize ~29.54 KB/30248, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 18ms, sequenceid=66, compaction requested=false 2023-05-27 12:01:27,134 INFO [M:0;jenkins-hbase4:37349] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 12:01:27,134 DEBUG [M:0;jenkins-hbase4:37349] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 12:01:27,135 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0dd7aa3c-6353-c15c-fa8b-f96e3670da0b/MasterData/WALs/jenkins-hbase4.apache.org,37349,1685188885789 2023-05-27 12:01:27,138 INFO [M:0;jenkins-hbase4:37349] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-27 12:01:27,138 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 12:01:27,138 INFO [M:0;jenkins-hbase4:37349] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37349 2023-05-27 12:01:27,141 DEBUG [M:0;jenkins-hbase4:37349] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37349,1685188885789 already deleted, retry=false 2023-05-27 12:01:27,289 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 12:01:27,289 INFO [M:0;jenkins-hbase4:37349] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37349,1685188885789; zookeeper connection closed. 
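The flush record above prints the same quantities twice, in bytes and in KiB: 24646 bytes is ~24.07 KB and 30248 bytes is ~29.54 KB, so the formatter divides by 1024. A two-line check of that arithmetic:

    // Reproduce the size strings from "dataSize ~24.07 KB/24646" and
    // "heapSize ~29.54 KB/30248": bytes / 1024, rounded to two decimals.
    public class FlushSizeCheck {
      public static void main(String[] args) {
        long dataBytes = 24646L;
        long heapBytes = 30248L;
        System.out.printf("dataSize ~%.2f KB%n", dataBytes / 1024.0);  // prints 24.07
        System.out.printf("heapSize ~%.2f KB%n", heapBytes / 1024.0);  // prints 29.54
      }
    }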
2023-05-27 12:01:27,289 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): master:37349-0x1006c8440100000, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 12:01:27,390 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): regionserver:36915-0x1006c8440100001, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 12:01:27,390 INFO [RS:0;jenkins-hbase4:36915] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36915,1685188885831; zookeeper connection closed. 2023-05-27 12:01:27,390 DEBUG [Listener at localhost/39827-EventThread] zookeeper.ZKWatcher(600): regionserver:36915-0x1006c8440100001, quorum=127.0.0.1:55033, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 12:01:27,390 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5ee0d82f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5ee0d82f 2023-05-27 12:01:27,391 INFO [Listener at localhost/39827] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-27 12:01:27,391 WARN [Listener at localhost/39827] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 12:01:27,394 INFO [Listener at localhost/39827] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 12:01:27,500 WARN [BP-1303540386-172.31.14.131-1685188885260 heartbeating to localhost/127.0.0.1:36327] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 12:01:27,500 WARN [BP-1303540386-172.31.14.131-1685188885260 heartbeating to localhost/127.0.0.1:36327] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1303540386-172.31.14.131-1685188885260 (Datanode Uuid e2032bd9-68a6-40b1-9fcb-f0f7f7fc81b4) service to localhost/127.0.0.1:36327 2023-05-27 12:01:27,500 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/cluster_1e1413b2-91b7-eb82-ce1d-69db907d30b9/dfs/data/data3/current/BP-1303540386-172.31.14.131-1685188885260] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 12:01:27,500 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/cluster_1e1413b2-91b7-eb82-ce1d-69db907d30b9/dfs/data/data4/current/BP-1303540386-172.31.14.131-1685188885260] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 12:01:27,501 WARN [Listener at localhost/39827] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 12:01:27,507 INFO [Listener at localhost/39827] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 12:01:27,609 WARN [BP-1303540386-172.31.14.131-1685188885260 heartbeating to localhost/127.0.0.1:36327] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 12:01:27,609 WARN [BP-1303540386-172.31.14.131-1685188885260 heartbeating to localhost/127.0.0.1:36327] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1303540386-172.31.14.131-1685188885260 (Datanode Uuid 0dc57100-f4af-4324-bd94-6e4ff691077c) service to localhost/127.0.0.1:36327 2023-05-27 12:01:27,610 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/cluster_1e1413b2-91b7-eb82-ce1d-69db907d30b9/dfs/data/data1/current/BP-1303540386-172.31.14.131-1685188885260] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 12:01:27,610 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/12a85e9f-0c62-e060-b266-840b8db7fe21/cluster_1e1413b2-91b7-eb82-ce1d-69db907d30b9/dfs/data/data2/current/BP-1303540386-172.31.14.131-1685188885260] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 12:01:27,619 INFO [Listener at localhost/39827] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 12:01:27,730 INFO [Listener at localhost/39827] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-27 12:01:27,740 INFO [Listener at localhost/39827] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-27 12:01:27,751 INFO [Listener at localhost/39827] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=132 (was 107) - Thread LEAK? -, OpenFileDescriptor=562 (was 534) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=39 (was 39), ProcessCount=169 (was 169), AvailableMemoryMB=3173 (was 3182)
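The closing ResourceChecker record compares thread and file-descriptor counts taken before and after the test and appends "LEAK?" when the after value exceeds the before value. A standalone sketch of that before/after accounting for threads (not the HBase ResourceChecker itself) using the standard ThreadMXBean:

    // Minimal before/after thread-count comparison in the spirit of
    // "Thread=132 (was 107) - Thread LEAK? -". The leaked daemon thread here
    // is a deliberate stand-in for test work that leaves a thread behind.
    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class ThreadLeakCheckSketch {
      public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int before = threads.getThreadCount();           // snapshot before the "test"

        Thread t = new Thread(() -> {
          try { Thread.sleep(60_000); } catch (InterruptedException ignored) { }
        });
        t.setDaemon(true);
        t.start();                                       // thread still alive after the "test"

        int after = threads.getThreadCount();            // snapshot after
        System.out.printf("Thread=%d (was %d)%s%n", after, before,
            after > before ? " - Thread LEAK? -" : "");
      }
    }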