2023-06-07 22:55:32,903 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916
2023-06-07 22:55:32,918 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins
2023-06-07 22:55:32,956 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=263, MaxFileDescriptor=60000, SystemLoadAverage=292, ProcessCount=171, AvailableMemoryMB=1768
2023-06-07 22:55:32,963 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-06-07 22:55:32,964 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/cluster_8f25a331-5b97-aa66-d890-c4a115fde1f4, deleteOnExit=true
2023-06-07 22:55:32,964 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-06-07 22:55:32,965 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/test.cache.data in system properties and HBase conf
2023-06-07 22:55:32,965 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/hadoop.tmp.dir in system properties and HBase conf
2023-06-07 22:55:32,966 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/hadoop.log.dir in system properties and HBase conf
2023-06-07 22:55:32,966 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/mapreduce.cluster.local.dir in system properties and HBase conf
2023-06-07 22:55:32,968 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-06-07 22:55:32,968 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-06-07 22:55:33,107 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-06-07 22:55:33,486 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-06-07 22:55:33,489 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-06-07 22:55:33,490 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-06-07 22:55:33,490 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-06-07 22:55:33,490 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-07 22:55:33,491 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-06-07 22:55:33,491 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-06-07 22:55:33,491 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-07 22:55:33,492 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/dfs.journalnode.edits.dir in system properties and HBase conf
2023-06-07 22:55:33,492 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-06-07 22:55:33,493 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/nfs.dump.dir in system properties and HBase conf
2023-06-07 22:55:33,493 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/java.io.tmpdir in system properties and HBase conf
2023-06-07 22:55:33,494 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/dfs.journalnode.edits.dir in system properties and HBase conf
2023-06-07 22:55:33,494 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-06-07 22:55:33,494 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-06-07 22:55:34,024 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-07 22:55:34,038 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-07 22:55:34,042 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-07 22:55:34,310 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-06-07 22:55:34,487 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-06-07 22:55:34,506 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:55:34,542 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:55:34,601 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/java.io.tmpdir/Jetty_localhost_34699_hdfs____7jkuwv/webapp
2023-06-07 22:55:34,742 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34699
2023-06-07 22:55:34,751 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-07 22:55:34,754 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-07 22:55:34,755 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-07 22:55:35,256 WARN [Listener at localhost/43147] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:55:35,333 WARN [Listener at localhost/43147] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-07 22:55:35,354 WARN [Listener at localhost/43147] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:55:35,363 INFO [Listener at localhost/43147] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:55:35,370 INFO [Listener at localhost/43147] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/java.io.tmpdir/Jetty_localhost_41229_datanode____.5tyzel/webapp
2023-06-07 22:55:35,477 INFO [Listener at localhost/43147] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41229
2023-06-07 22:55:35,759 WARN [Listener at localhost/45603] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:55:35,768 WARN [Listener at localhost/45603] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-07 22:55:35,773 WARN [Listener at localhost/45603] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:55:35,776 INFO [Listener at localhost/45603] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:55:35,781 INFO [Listener at localhost/45603] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/java.io.tmpdir/Jetty_localhost_41591_datanode____.lrwt61/webapp
2023-06-07 22:55:35,886 INFO [Listener at localhost/45603] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41591
2023-06-07 22:55:35,894 WARN [Listener at localhost/37029] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:55:36,203 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x64a2ca05001c66fc: Processing first storage report for DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d from datanode d19a713c-dcb8-4b12-8846-f0a59d9b0d49
2023-06-07 22:55:36,205 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x64a2ca05001c66fc: from storage DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d node DatanodeRegistration(127.0.0.1:39159, datanodeUuid=d19a713c-dcb8-4b12-8846-f0a59d9b0d49, infoPort=36845, infoSecurePort=0, ipcPort=45603, storageInfo=lv=-57;cid=testClusterID;nsid=1463789085;c=1686178534116), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-06-07 22:55:36,205 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd38163d553ed6901: Processing first storage report for DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3 from datanode dcc337b2-c192-4ff4-a747-01abf7c7d861
2023-06-07 22:55:36,206 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd38163d553ed6901: from storage DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3 node DatanodeRegistration(127.0.0.1:36469, datanodeUuid=dcc337b2-c192-4ff4-a747-01abf7c7d861, infoPort=37661, infoSecurePort=0, ipcPort=37029, storageInfo=lv=-57;cid=testClusterID;nsid=1463789085;c=1686178534116), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-06-07 22:55:36,206 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x64a2ca05001c66fc: Processing first storage report for DS-1d5b83fd-5f19-45f0-bff4-94d4e419408e from datanode d19a713c-dcb8-4b12-8846-f0a59d9b0d49
2023-06-07 22:55:36,206 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x64a2ca05001c66fc: from storage DS-1d5b83fd-5f19-45f0-bff4-94d4e419408e node DatanodeRegistration(127.0.0.1:39159, datanodeUuid=d19a713c-dcb8-4b12-8846-f0a59d9b0d49, infoPort=36845, infoSecurePort=0, ipcPort=45603, storageInfo=lv=-57;cid=testClusterID;nsid=1463789085;c=1686178534116), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:55:36,206 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd38163d553ed6901: Processing first storage report for DS-0308beca-6295-4db4-939b-cd34f65aeb27 from datanode dcc337b2-c192-4ff4-a747-01abf7c7d861
2023-06-07 22:55:36,206 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd38163d553ed6901: from storage DS-0308beca-6295-4db4-939b-cd34f65aeb27 node DatanodeRegistration(127.0.0.1:36469, datanodeUuid=dcc337b2-c192-4ff4-a747-01abf7c7d861, infoPort=37661, infoSecurePort=0, ipcPort=37029, storageInfo=lv=-57;cid=testClusterID;nsid=1463789085;c=1686178534116), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:55:36,297 DEBUG [Listener at localhost/37029] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916
2023-06-07 22:55:36,378 INFO [Listener at localhost/37029] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/cluster_8f25a331-5b97-aa66-d890-c4a115fde1f4/zookeeper_0, clientPort=56943, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/cluster_8f25a331-5b97-aa66-d890-c4a115fde1f4/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/cluster_8f25a331-5b97-aa66-d890-c4a115fde1f4/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-06-07 22:55:36,397 INFO [Listener at localhost/37029] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56943
2023-06-07 22:55:36,408 INFO [Listener at localhost/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:55:36,411 INFO [Listener at localhost/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:55:37,078 INFO [Listener at localhost/37029] util.FSUtils(471): Created version file at hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9 with version=8
2023-06-07 22:55:37,079 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/hbase-staging
2023-06-07 22:55:37,387 INFO [Listener at localhost/37029] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-06-07 22:55:37,853 INFO [Listener at localhost/37029] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-06-07 22:55:37,884 INFO [Listener at localhost/37029] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 22:55:37,885 INFO [Listener at localhost/37029] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-07 22:55:37,885 INFO [Listener at localhost/37029] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-07 22:55:37,885 INFO [Listener at localhost/37029] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 22:55:37,886 INFO [Listener at localhost/37029] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-07 22:55:38,041 INFO [Listener at localhost/37029] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-06-07 22:55:38,122 DEBUG [Listener at localhost/37029] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-06-07 22:55:38,219 INFO [Listener at localhost/37029] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39149
2023-06-07 22:55:38,229 INFO [Listener at localhost/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:55:38,232 INFO [Listener at localhost/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:55:38,253 INFO [Listener at localhost/37029] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39149 connecting to ZooKeeper ensemble=127.0.0.1:56943
2023-06-07 22:55:38,293 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:391490x0, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-07 22:55:38,296 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39149-0x100a7811eaa0000 connected
2023-06-07 22:55:38,324 DEBUG [Listener at localhost/37029] zookeeper.ZKUtil(164): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-07 22:55:38,324 DEBUG [Listener at localhost/37029] zookeeper.ZKUtil(164): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-07 22:55:38,327 DEBUG [Listener at localhost/37029] zookeeper.ZKUtil(164): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-07 22:55:38,335 DEBUG [Listener at localhost/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39149
2023-06-07 22:55:38,335 DEBUG [Listener at localhost/37029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39149
2023-06-07 22:55:38,335 DEBUG [Listener at localhost/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39149
2023-06-07 22:55:38,336 DEBUG [Listener at localhost/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39149
2023-06-07 22:55:38,336 DEBUG [Listener at localhost/37029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39149
2023-06-07 22:55:38,341 INFO [Listener at localhost/37029] master.HMaster(444): hbase.rootdir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9, hbase.cluster.distributed=false
2023-06-07 22:55:38,405 INFO [Listener at localhost/37029] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-06-07 22:55:38,406 INFO [Listener at localhost/37029] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 22:55:38,406 INFO [Listener at localhost/37029] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-07 22:55:38,406 INFO [Listener at localhost/37029] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-07 22:55:38,406 INFO [Listener at localhost/37029] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 22:55:38,406 INFO [Listener at localhost/37029] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-07 22:55:38,411 INFO [Listener at localhost/37029] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-06-07 22:55:38,414 INFO [Listener at localhost/37029] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46337
2023-06-07 22:55:38,416 INFO [Listener at localhost/37029] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-06-07 22:55:38,422 DEBUG [Listener at localhost/37029] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-06-07 22:55:38,423 INFO [Listener at localhost/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:55:38,426 INFO [Listener at localhost/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:55:38,428 INFO [Listener at localhost/37029] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46337 connecting to ZooKeeper ensemble=127.0.0.1:56943
2023-06-07 22:55:38,432 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:463370x0, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-07 22:55:38,433 DEBUG [Listener at localhost/37029] zookeeper.ZKUtil(164): regionserver:463370x0, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-07 22:55:38,433 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46337-0x100a7811eaa0001 connected
2023-06-07 22:55:38,434 DEBUG [Listener at localhost/37029] zookeeper.ZKUtil(164): regionserver:46337-0x100a7811eaa0001, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-07 22:55:38,435 DEBUG [Listener at localhost/37029] zookeeper.ZKUtil(164): regionserver:46337-0x100a7811eaa0001, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-07 22:55:38,439 DEBUG [Listener at localhost/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46337
2023-06-07 22:55:38,439 DEBUG [Listener at localhost/37029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46337
2023-06-07 22:55:38,440 DEBUG [Listener at localhost/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46337
2023-06-07 22:55:38,442 DEBUG [Listener at localhost/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46337
2023-06-07 22:55:38,442 DEBUG [Listener at localhost/37029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46337
2023-06-07 22:55:38,451 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,39149,1686178537215
2023-06-07 22:55:38,461 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-07 22:55:38,462 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,39149,1686178537215
2023-06-07 22:55:38,484 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-07 22:55:38,484 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:46337-0x100a7811eaa0001, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-07 22:55:38,484 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:55:38,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-07 22:55:38,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,39149,1686178537215 from backup master directory
2023-06-07 22:55:38,486 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-07 22:55:38,489 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,39149,1686178537215
2023-06-07 22:55:38,490 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-07 22:55:38,490 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-07 22:55:38,491 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,39149,1686178537215
2023-06-07 22:55:38,494 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-06-07 22:55:38,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-06-07 22:55:38,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/hbase.id with ID: c7b6f9d9-c999-4db6-aa44-978bc357ec39
2023-06-07 22:55:38,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:55:38,658 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:55:38,704 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x63f6cae1 to 127.0.0.1:56943 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-07 22:55:38,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@75e4d180, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-07 22:55:38,760 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-07 22:55:38,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-06-07 22:55:38,770 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-07 22:55:38,802 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/data/master/store-tmp
2023-06-07 22:55:38,834 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 22:55:38,834 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-07 22:55:38,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:55:38,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:55:38,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-07 22:55:38,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:55:38,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:55:38,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-07 22:55:38,836 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/WALs/jenkins-hbase4.apache.org,39149,1686178537215
2023-06-07 22:55:38,858 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39149%2C1686178537215, suffix=, logDir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/WALs/jenkins-hbase4.apache.org,39149,1686178537215, archiveDir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/oldWALs, maxLogs=10
2023-06-07 22:55:38,878 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate()
	at java.lang.Class.getMethod(Class.java:1786)
	at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.(CommonFSUtils.java:750)
	at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160)
	at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62)
	at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295)
	at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200)
	at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348)
	at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:55:38,900 INFO
[master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/WALs/jenkins-hbase4.apache.org,39149,1686178537215/jenkins-hbase4.apache.org%2C39149%2C1686178537215.1686178538876 2023-06-07 22:55:38,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK], DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK]] 2023-06-07 22:55:38,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:55:38,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:55:38,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:55:38,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:55:38,966 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:55:38,974 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-07 22:55:39,000 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-07 22:55:39,015 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:55:39,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:55:39,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:55:39,039 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:55:39,049 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:55:39,050 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=877765, jitterRate=0.11613596975803375}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-07 22:55:39,051 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-07 22:55:39,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-07 22:55:39,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-07 22:55:39,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-07 22:55:39,078 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-06-07 22:55:39,080 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-06-07 22:55:39,113 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 32 msec 2023-06-07 22:55:39,113 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-07 22:55:39,139 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-07 22:55:39,144 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-07 22:55:39,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-07 22:55:39,175 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-06-07 22:55:39,177 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-07 22:55:39,182 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-07 22:55:39,186 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-07 22:55:39,188 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:55:39,189 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-07 22:55:39,190 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-07 22:55:39,201 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-07 22:55:39,205 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-07 22:55:39,205 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:46337-0x100a7811eaa0001, 
quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-07 22:55:39,206 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:55:39,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,39149,1686178537215, sessionid=0x100a7811eaa0000, setting cluster-up flag (Was=false) 2023-06-07 22:55:39,220 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:55:39,228 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-07 22:55:39,229 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39149,1686178537215 2023-06-07 22:55:39,234 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:55:39,244 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-07 22:55:39,246 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39149,1686178537215 2023-06-07 22:55:39,248 WARN 
[master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/.hbase-snapshot/.tmp 2023-06-07 22:55:39,348 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-07 22:55:39,349 INFO [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(951): ClusterId : c7b6f9d9-c999-4db6-aa44-978bc357ec39 2023-06-07 22:55:39,353 DEBUG [RS:0;jenkins-hbase4:46337] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-07 22:55:39,358 DEBUG [RS:0;jenkins-hbase4:46337] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-07 22:55:39,358 DEBUG [RS:0;jenkins-hbase4:46337] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-07 22:55:39,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:55:39,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:55:39,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:55:39,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:55:39,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service 
name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-06-07 22:55:39,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:55:39,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-07 22:55:39,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:55:39,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686178569360 2023-06-07 22:55:39,361 DEBUG [RS:0;jenkins-hbase4:46337] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-07 22:55:39,362 DEBUG [RS:0;jenkins-hbase4:46337] zookeeper.ReadOnlyZKClient(139): Connect 0x19304fcc to 127.0.0.1:56943 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-07 22:55:39,363 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-07 22:55:39,368 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-07 22:55:39,369 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-07 22:55:39,369 DEBUG [RS:0;jenkins-hbase4:46337] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@17f7b29a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, 
writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-07 22:55:39,370 DEBUG [RS:0;jenkins-hbase4:46337] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@13c824bc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-07 22:55:39,373 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-07 22:55:39,374 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-07 22:55:39,381 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-07 22:55:39,381 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-07 22:55:39,382 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-07 22:55:39,382 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-07 22:55:39,383 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-07 22:55:39,385 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-07 22:55:39,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-07 22:55:39,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-07 22:55:39,388 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-07 22:55:39,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-07 22:55:39,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178539390,5,FailOnTimeoutGroup] 2023-06-07 22:55:39,391 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small 
files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178539391,5,FailOnTimeoutGroup] 2023-06-07 22:55:39,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-07 22:55:39,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-07 22:55:39,392 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-07 22:55:39,393 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-07 22:55:39,409 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-07 22:55:39,411 DEBUG [RS:0;jenkins-hbase4:46337] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46337 2023-06-07 22:55:39,411 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-07 22:55:39,412 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', 
DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9 2023-06-07 22:55:39,417 INFO [RS:0;jenkins-hbase4:46337] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-07 22:55:39,417 INFO [RS:0;jenkins-hbase4:46337] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-07 22:55:39,417 DEBUG [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-07 22:55:39,421 INFO [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,39149,1686178537215 with isa=jenkins-hbase4.apache.org/172.31.14.131:46337, startcode=1686178538405 2023-06-07 22:55:39,434 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:55:39,437 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-07 22:55:39,440 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/info 2023-06-07 22:55:39,441 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-07 22:55:39,443 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-06-07 22:55:39,443 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-07 22:55:39,446 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/rep_barrier 2023-06-07 22:55:39,447 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-07 22:55:39,447 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:55:39,448 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-07 22:55:39,450 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/table 2023-06-07 22:55:39,450 DEBUG [RS:0;jenkins-hbase4:46337] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-07 22:55:39,451 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-07 22:55:39,451 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:55:39,453 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740 2023-06-07 22:55:39,454 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740 2023-06-07 22:55:39,458 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-07 22:55:39,459 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-07 22:55:39,463 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:55:39,464 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=830655, jitterRate=0.05623370409011841}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-07 22:55:39,464 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-07 22:55:39,464 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-07 22:55:39,464 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-07 22:55:39,464 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-07 22:55:39,464 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-07 22:55:39,464 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-07 22:55:39,465 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-07 22:55:39,465 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-07 22:55:39,470 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-07 22:55:39,470 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-07 22:55:39,478 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-07 22:55:39,490 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-07 22:55:39,493 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-07 22:55:39,568 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44237, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-06-07 22:55:39,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39149] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:55:39,594 DEBUG [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9 2023-06-07 22:55:39,594 DEBUG [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43147 2023-06-07 22:55:39,594 DEBUG [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-07 22:55:39,601 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-07 22:55:39,601 DEBUG [RS:0;jenkins-hbase4:46337] zookeeper.ZKUtil(162): 
regionserver:46337-0x100a7811eaa0001, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:55:39,602 WARN [RS:0;jenkins-hbase4:46337] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-07 22:55:39,602 INFO [RS:0;jenkins-hbase4:46337] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-07 22:55:39,603 DEBUG [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(1946): logDir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:55:39,604 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46337,1686178538405] 2023-06-07 22:55:39,611 DEBUG [RS:0;jenkins-hbase4:46337] zookeeper.ZKUtil(162): regionserver:46337-0x100a7811eaa0001, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:55:39,621 DEBUG [RS:0;jenkins-hbase4:46337] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-07 22:55:39,629 INFO [RS:0;jenkins-hbase4:46337] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-07 22:55:39,645 DEBUG [jenkins-hbase4:39149] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-07 22:55:39,647 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46337,1686178538405, state=OPENING 2023-06-07 22:55:39,648 INFO [RS:0;jenkins-hbase4:46337] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 
2023-06-07 22:55:39,651 INFO [RS:0;jenkins-hbase4:46337] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-07 22:55:39,652 INFO [RS:0;jenkins-hbase4:46337] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-07 22:55:39,652 INFO [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-07 22:55:39,654 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-07 22:55:39,656 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:55:39,657 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-07 22:55:39,659 INFO [RS:0;jenkins-hbase4:46337] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-07 22:55:39,660 DEBUG [RS:0;jenkins-hbase4:46337] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:55:39,660 DEBUG [RS:0;jenkins-hbase4:46337] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:55:39,660 DEBUG [RS:0;jenkins-hbase4:46337] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:55:39,660 DEBUG [RS:0;jenkins-hbase4:46337] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:55:39,661 DEBUG [RS:0;jenkins-hbase4:46337] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:55:39,661 DEBUG [RS:0;jenkins-hbase4:46337] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-07 22:55:39,661 DEBUG [RS:0;jenkins-hbase4:46337] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:55:39,661 DEBUG [RS:0;jenkins-hbase4:46337] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:55:39,661 DEBUG [RS:0;jenkins-hbase4:46337] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:55:39,661 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, 
server=jenkins-hbase4.apache.org,46337,1686178538405}] 2023-06-07 22:55:39,661 DEBUG [RS:0;jenkins-hbase4:46337] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:55:39,662 INFO [RS:0;jenkins-hbase4:46337] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-07 22:55:39,663 INFO [RS:0;jenkins-hbase4:46337] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-07 22:55:39,663 INFO [RS:0;jenkins-hbase4:46337] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-07 22:55:39,683 INFO [RS:0;jenkins-hbase4:46337] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-07 22:55:39,686 INFO [RS:0;jenkins-hbase4:46337] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46337,1686178538405-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-07 22:55:39,702 INFO [RS:0;jenkins-hbase4:46337] regionserver.Replication(203): jenkins-hbase4.apache.org,46337,1686178538405 started 2023-06-07 22:55:39,702 INFO [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46337,1686178538405, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46337, sessionid=0x100a7811eaa0001 2023-06-07 22:55:39,702 DEBUG [RS:0;jenkins-hbase4:46337] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-07 22:55:39,703 DEBUG [RS:0;jenkins-hbase4:46337] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:55:39,703 DEBUG [RS:0;jenkins-hbase4:46337] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46337,1686178538405' 2023-06-07 22:55:39,703 DEBUG [RS:0;jenkins-hbase4:46337] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-07 22:55:39,703 DEBUG [RS:0;jenkins-hbase4:46337] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-07 22:55:39,704 DEBUG [RS:0;jenkins-hbase4:46337] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-07 22:55:39,704 DEBUG [RS:0;jenkins-hbase4:46337] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-07 22:55:39,704 DEBUG [RS:0;jenkins-hbase4:46337] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:55:39,704 DEBUG [RS:0;jenkins-hbase4:46337] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46337,1686178538405' 2023-06-07 22:55:39,704 DEBUG [RS:0;jenkins-hbase4:46337] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: 
'/hbase/online-snapshot/abort' 2023-06-07 22:55:39,704 DEBUG [RS:0;jenkins-hbase4:46337] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-07 22:55:39,704 DEBUG [RS:0;jenkins-hbase4:46337] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-07 22:55:39,705 INFO [RS:0;jenkins-hbase4:46337] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-07 22:55:39,705 INFO [RS:0;jenkins-hbase4:46337] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-06-07 22:55:39,815 INFO [RS:0;jenkins-hbase4:46337] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46337%2C1686178538405, suffix=, logDir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405, archiveDir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/oldWALs, maxLogs=32 2023-06-07 22:55:39,830 INFO [RS:0;jenkins-hbase4:46337] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405/jenkins-hbase4.apache.org%2C46337%2C1686178538405.1686178539818 2023-06-07 22:55:39,830 DEBUG [RS:0;jenkins-hbase4:46337] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK], DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK]] 2023-06-07 22:55:39,849 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:55:39,852 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-07 22:55:39,855 INFO [RS-EventLoopGroup-3-1] 
ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57828, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-07 22:55:39,865 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-07 22:55:39,866 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-07 22:55:39,869 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46337%2C1686178538405.meta, suffix=.meta, logDir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405, archiveDir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/oldWALs, maxLogs=32 2023-06-07 22:55:39,883 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405/jenkins-hbase4.apache.org%2C46337%2C1686178538405.meta.1686178539870.meta 2023-06-07 22:55:39,883 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK], DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK]] 2023-06-07 22:55:39,883 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:55:39,885 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 
536870911 2023-06-07 22:55:39,903 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-07 22:55:39,908 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-07 22:55:39,914 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-07 22:55:39,914 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:55:39,914 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-07 22:55:39,914 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-07 22:55:39,917 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-07 22:55:39,918 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/info 2023-06-07 22:55:39,919 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/info 2023-06-07 22:55:39,919 INFO [StoreOpener-1588230740-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-07 22:55:39,921 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:55:39,921 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-07 22:55:39,922 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/rep_barrier 2023-06-07 22:55:39,923 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/rep_barrier 2023-06-07 22:55:39,923 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 
5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-07 22:55:39,924 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:55:39,924 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-07 22:55:39,926 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/table 2023-06-07 22:55:39,926 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/table 2023-06-07 22:55:39,927 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for 
tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-07 22:55:39,928 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:55:39,929 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740 2023-06-07 22:55:39,932 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740 2023-06-07 22:55:39,936 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-07 22:55:39,939 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-07 22:55:39,941 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=783610, jitterRate=-0.0035888254642486572}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-07 22:55:39,941 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-07 22:55:39,952 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686178539841 2023-06-07 22:55:39,969 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-07 22:55:39,970 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-07 22:55:39,971 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46337,1686178538405, state=OPEN 2023-06-07 22:55:39,974 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-07 22:55:39,975 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-07 22:55:39,979 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-07 22:55:39,979 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): 
Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46337,1686178538405 in 314 msec 2023-06-07 22:55:39,985 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-07 22:55:39,985 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 502 msec 2023-06-07 22:55:39,990 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 699 msec 2023-06-07 22:55:39,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686178539991, completionTime=-1 2023-06-07 22:55:39,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-07 22:55:39,991 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-06-07 22:55:40,050 DEBUG [hconnection-0x33fd86c0-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-07 22:55:40,053 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57844, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-07 22:55:40,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-07 22:55:40,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686178600069 2023-06-07 22:55:40,070 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686178660070 2023-06-07 22:55:40,070 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 78 msec 2023-06-07 22:55:40,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39149,1686178537215-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-07 22:55:40,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39149,1686178537215-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-07 22:55:40,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39149,1686178537215-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-06-07 22:55:40,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:39149, period=300000, unit=MILLISECONDS is enabled. 2023-06-07 22:55:40,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-07 22:55:40,098 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-07 22:55:40,106 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-06-07 22:55:40,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-07 22:55:40,117 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-07 22:55:40,119 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-07 22:55:40,121 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-07 22:55:40,141 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/.tmp/data/hbase/namespace/a82d35c1a007f276f9578957f2f6d668 2023-06-07 22:55:40,143 DEBUG [HFileArchiver-1] 
backup.HFileArchiver(153): Directory hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/.tmp/data/hbase/namespace/a82d35c1a007f276f9578957f2f6d668 empty. 2023-06-07 22:55:40,144 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/.tmp/data/hbase/namespace/a82d35c1a007f276f9578957f2f6d668 2023-06-07 22:55:40,144 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-07 22:55:40,169 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-07 22:55:40,172 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => a82d35c1a007f276f9578957f2f6d668, NAME => 'hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/.tmp 2023-06-07 22:55:40,187 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:55:40,187 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing a82d35c1a007f276f9578957f2f6d668, disabling compactions & flushes 2023-06-07 22:55:40,187 INFO [RegionOpenAndInit-hbase:namespace-pool-0] 
regionserver.HRegion(1626): Closing region hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. 2023-06-07 22:55:40,187 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. 2023-06-07 22:55:40,187 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. after waiting 0 ms 2023-06-07 22:55:40,187 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. 2023-06-07 22:55:40,187 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. 2023-06-07 22:55:40,187 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for a82d35c1a007f276f9578957f2f6d668: 2023-06-07 22:55:40,191 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-07 22:55:40,206 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178540194"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178540194"}]},"ts":"1686178540194"} 2023-06-07 22:55:40,232 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-07 22:55:40,234 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-07 22:55:40,239 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178540234"}]},"ts":"1686178540234"} 2023-06-07 22:55:40,243 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-07 22:55:40,253 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a82d35c1a007f276f9578957f2f6d668, ASSIGN}] 2023-06-07 22:55:40,256 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a82d35c1a007f276f9578957f2f6d668, ASSIGN 2023-06-07 22:55:40,257 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=a82d35c1a007f276f9578957f2f6d668, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46337,1686178538405; forceNewPlan=false, retain=false 2023-06-07 22:55:40,408 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=a82d35c1a007f276f9578957f2f6d668, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:55:40,409 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178540408"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178540408"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178540408"}]},"ts":"1686178540408"} 2023-06-07 22:55:40,415 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure a82d35c1a007f276f9578957f2f6d668, server=jenkins-hbase4.apache.org,46337,1686178538405}] 2023-06-07 22:55:40,575 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. 2023-06-07 22:55:40,575 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a82d35c1a007f276f9578957f2f6d668, NAME => 'hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668.', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:55:40,576 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace a82d35c1a007f276f9578957f2f6d668 2023-06-07 22:55:40,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:55:40,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a82d35c1a007f276f9578957f2f6d668 2023-06-07 22:55:40,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a82d35c1a007f276f9578957f2f6d668 2023-06-07 22:55:40,579 INFO 
[StoreOpener-a82d35c1a007f276f9578957f2f6d668-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region a82d35c1a007f276f9578957f2f6d668 2023-06-07 22:55:40,582 DEBUG [StoreOpener-a82d35c1a007f276f9578957f2f6d668-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/namespace/a82d35c1a007f276f9578957f2f6d668/info 2023-06-07 22:55:40,582 DEBUG [StoreOpener-a82d35c1a007f276f9578957f2f6d668-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/namespace/a82d35c1a007f276f9578957f2f6d668/info 2023-06-07 22:55:40,582 INFO [StoreOpener-a82d35c1a007f276f9578957f2f6d668-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a82d35c1a007f276f9578957f2f6d668 columnFamilyName info 2023-06-07 22:55:40,583 INFO [StoreOpener-a82d35c1a007f276f9578957f2f6d668-1] regionserver.HStore(310): Store=a82d35c1a007f276f9578957f2f6d668/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:55:40,584 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/namespace/a82d35c1a007f276f9578957f2f6d668 2023-06-07 22:55:40,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/namespace/a82d35c1a007f276f9578957f2f6d668 2023-06-07 22:55:40,590 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a82d35c1a007f276f9578957f2f6d668 2023-06-07 22:55:40,593 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/namespace/a82d35c1a007f276f9578957f2f6d668/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:55:40,594 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a82d35c1a007f276f9578957f2f6d668; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=752192, jitterRate=-0.043539151549339294}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-07 22:55:40,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a82d35c1a007f276f9578957f2f6d668: 2023-06-07 22:55:40,596 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668., pid=6, masterSystemTime=1686178540569 2023-06-07 22:55:40,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open 
deploy task for hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. 2023-06-07 22:55:40,602 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. 2023-06-07 22:55:40,604 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=a82d35c1a007f276f9578957f2f6d668, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:55:40,604 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178540603"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1686178540603"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686178540603"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178540603"}]},"ts":"1686178540603"} 2023-06-07 22:55:40,612 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-07 22:55:40,612 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure a82d35c1a007f276f9578957f2f6d668, server=jenkins-hbase4.apache.org,46337,1686178538405 in 193 msec 2023-06-07 22:55:40,615 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-07 22:55:40,616 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=a82d35c1a007f276f9578957f2f6d668, ASSIGN in 359 msec 2023-06-07 22:55:40,617 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-07 
22:55:40,617 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178540617"}]},"ts":"1686178540617"} 2023-06-07 22:55:40,620 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-07 22:55:40,624 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-07 22:55:40,624 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-07 22:55:40,625 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-07 22:55:40,625 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:55:40,628 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 516 msec 2023-06-07 22:55:40,660 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-07 22:55:40,679 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-07 22:55:40,687 
INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 31 msec 2023-06-07 22:55:40,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-07 22:55:40,706 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-07 22:55:40,710 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 15 msec 2023-06-07 22:55:40,721 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-07 22:55:40,723 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-07 22:55:40,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.232sec 2023-06-07 22:55:40,725 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-07 22:55:40,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-06-07 22:55:40,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-07 22:55:40,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39149,1686178537215-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-07 22:55:40,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39149,1686178537215-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-06-07 22:55:40,738 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-07 22:55:40,755 DEBUG [Listener at localhost/37029] zookeeper.ReadOnlyZKClient(139): Connect 0x7036f427 to 127.0.0.1:56943 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-07 22:55:40,759 DEBUG [Listener at localhost/37029] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@12c52b8f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-07 22:55:40,784 DEBUG [hconnection-0x7ad7f853-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-07 22:55:40,796 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36694, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-07 22:55:40,806 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,39149,1686178537215 2023-06-07 22:55:40,807 INFO [Listener at localhost/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block 
reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:55:40,814 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-07 22:55:40,814 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:55:40,815 INFO [Listener at localhost/37029] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-07 22:55:40,824 DEBUG [Listener at localhost/37029] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-07 22:55:40,828 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47596, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-07 22:55:40,837 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39149] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-07 22:55:40,837 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39149] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-07 22:55:40,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39149] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-07 22:55:40,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39149] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-06-07 22:55:40,845 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-07 22:55:40,847 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-07 22:55:40,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39149] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-06-07 22:55:40,851 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec 2023-06-07 22:55:40,852 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec empty. 
2023-06-07 22:55:40,854 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec 2023-06-07 22:55:40,854 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-06-07 22:55:40,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39149] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-07 22:55:40,879 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-06-07 22:55:40,881 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 620cec8671a9d14c4b8fce34c523f8ec, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/.tmp 2023-06-07 22:55:40,897 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:55:40,897 DEBUG 
[RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing 620cec8671a9d14c4b8fce34c523f8ec, disabling compactions & flushes 2023-06-07 22:55:40,897 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec. 2023-06-07 22:55:40,897 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec. 2023-06-07 22:55:40,897 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec. after waiting 0 ms 2023-06-07 22:55:40,897 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec. 2023-06-07 22:55:40,897 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec. 
2023-06-07 22:55:40,897 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 620cec8671a9d14c4b8fce34c523f8ec: 2023-06-07 22:55:40,901 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-07 22:55:40,903 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1686178540903"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178540903"}]},"ts":"1686178540903"} 2023-06-07 22:55:40,908 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-07 22:55:40,910 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-07 22:55:40,910 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178540910"}]},"ts":"1686178540910"} 2023-06-07 22:55:40,913 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-06-07 22:55:40,919 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=620cec8671a9d14c4b8fce34c523f8ec, ASSIGN}] 2023-06-07 22:55:40,921 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=620cec8671a9d14c4b8fce34c523f8ec, ASSIGN 2023-06-07 22:55:40,922 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=620cec8671a9d14c4b8fce34c523f8ec, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46337,1686178538405; forceNewPlan=false, retain=false 2023-06-07 22:55:41,073 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=620cec8671a9d14c4b8fce34c523f8ec, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:55:41,074 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1686178541073"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178541073"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178541073"}]},"ts":"1686178541073"} 2023-06-07 22:55:41,077 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 620cec8671a9d14c4b8fce34c523f8ec, server=jenkins-hbase4.apache.org,46337,1686178538405}] 2023-06-07 22:55:41,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec. 
2023-06-07 22:55:41,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 620cec8671a9d14c4b8fce34c523f8ec, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:55:41,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling 620cec8671a9d14c4b8fce34c523f8ec 2023-06-07 22:55:41,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:55:41,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 620cec8671a9d14c4b8fce34c523f8ec 2023-06-07 22:55:41,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 620cec8671a9d14c4b8fce34c523f8ec 2023-06-07 22:55:41,240 INFO [StoreOpener-620cec8671a9d14c4b8fce34c523f8ec-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 620cec8671a9d14c4b8fce34c523f8ec 2023-06-07 22:55:41,243 DEBUG [StoreOpener-620cec8671a9d14c4b8fce34c523f8ec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info 2023-06-07 22:55:41,243 DEBUG [StoreOpener-620cec8671a9d14c4b8fce34c523f8ec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info 2023-06-07 22:55:41,244 INFO [StoreOpener-620cec8671a9d14c4b8fce34c523f8ec-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 620cec8671a9d14c4b8fce34c523f8ec columnFamilyName info 2023-06-07 22:55:41,244 INFO [StoreOpener-620cec8671a9d14c4b8fce34c523f8ec-1] regionserver.HStore(310): Store=620cec8671a9d14c4b8fce34c523f8ec/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:55:41,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec 2023-06-07 22:55:41,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec 2023-06-07 22:55:41,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq 
id for 620cec8671a9d14c4b8fce34c523f8ec 2023-06-07 22:55:41,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:55:41,257 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 620cec8671a9d14c4b8fce34c523f8ec; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=819532, jitterRate=0.042090028524398804}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-07 22:55:41,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 620cec8671a9d14c4b8fce34c523f8ec: 2023-06-07 22:55:41,258 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec., pid=11, masterSystemTime=1686178541231 2023-06-07 22:55:41,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec. 2023-06-07 22:55:41,261 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec. 
2023-06-07 22:55:41,262 INFO  [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=620cec8671a9d14c4b8fce34c523f8ec, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46337,1686178538405
2023-06-07 22:55:41,263 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1686178541262"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1686178541262"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686178541262"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178541262"}]},"ts":"1686178541262"}
2023-06-07 22:55:41,270 INFO  [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10
2023-06-07 22:55:41,270 INFO  [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 620cec8671a9d14c4b8fce34c523f8ec, server=jenkins-hbase4.apache.org,46337,1686178538405 in 189 msec
2023-06-07 22:55:41,274 INFO  [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9
2023-06-07 22:55:41,274 INFO  [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=620cec8671a9d14c4b8fce34c523f8ec, ASSIGN in 351 msec
2023-06-07 22:55:41,276 INFO  [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-06-07 22:55:41,276 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178541276"}]},"ts":"1686178541276"}
2023-06-07 22:55:41,278 INFO  [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta
2023-06-07 22:55:41,281 INFO  [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION
2023-06-07 22:55:41,284 INFO  [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 440 msec
2023-06-07 22:55:45,423 WARN  [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
2023-06-07 22:55:45,626 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta'
2023-06-07 22:55:45,627 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace'
2023-06-07 22:55:45,628 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling'
2023-06-07 22:55:47,382 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-06-07 22:55:47,383 INFO  [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers
2023-06-07 22:55:50,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39149] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-06-07 22:55:50,867 INFO  [Listener at localhost/37029] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed
2023-06-07 22:55:50,870 DEBUG [Listener at localhost/37029] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling
2023-06-07 22:55:50,871 DEBUG [Listener at localhost/37029] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.
2023-06-07 22:56:02,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46337] regionserver.HRegion(9158): Flush requested on 620cec8671a9d14c4b8fce34c523f8ec
2023-06-07 22:56:02,899 INFO  [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 620cec8671a9d14c4b8fce34c523f8ec 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-07 22:56:02,968 INFO  [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/.tmp/info/b8bebf9da23b43b59a173c5fa5f6f777
2023-06-07 22:56:03,010 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/.tmp/info/b8bebf9da23b43b59a173c5fa5f6f777 as hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/b8bebf9da23b43b59a173c5fa5f6f777
2023-06-07 22:56:03,023 INFO  [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/b8bebf9da23b43b59a173c5fa5f6f777, entries=7, sequenceid=11, filesize=12.1 K
2023-06-07 22:56:03,025 INFO  [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 620cec8671a9d14c4b8fce34c523f8ec in 127ms, sequenceid=11, compaction requested=false
2023-06-07 22:56:03,027 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 620cec8671a9d14c4b8fce34c523f8ec:
2023-06-07 22:56:11,110 INFO  [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK], DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK]]
2023-06-07 22:56:13,313 INFO  [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK], DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK]]
2023-06-07 22:56:15,516 INFO  [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK], DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK]]
2023-06-07 22:56:17,719 INFO  [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK], DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK]]
2023-06-07 22:56:17,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46337] regionserver.HRegion(9158): Flush requested on 620cec8671a9d14c4b8fce34c523f8ec
2023-06-07 22:56:17,720 INFO  [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 620cec8671a9d14c4b8fce34c523f8ec 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-07 22:56:17,921 INFO  [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK], DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK]]
2023-06-07 22:56:17,939 INFO  [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/.tmp/info/c1b51315f492410c82aa3d1f25db73f8
2023-06-07 22:56:17,947 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/.tmp/info/c1b51315f492410c82aa3d1f25db73f8 as hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/c1b51315f492410c82aa3d1f25db73f8
2023-06-07 22:56:17,957 INFO  [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/c1b51315f492410c82aa3d1f25db73f8, entries=7, sequenceid=21, filesize=12.1 K
2023-06-07 22:56:18,158 INFO  [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK], DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK]]
2023-06-07 22:56:18,159 INFO  [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 620cec8671a9d14c4b8fce34c523f8ec in 439ms, sequenceid=21, compaction requested=false
2023-06-07 22:56:18,159 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 620cec8671a9d14c4b8fce34c523f8ec:
2023-06-07 22:56:18,159 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K
2023-06-07 22:56:18,159 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1
2023-06-07 22:56:18,161 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/b8bebf9da23b43b59a173c5fa5f6f777 because midkey is the same as first or last row
2023-06-07 22:56:19,922 INFO  [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK], DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK]]
2023-06-07 22:56:22,125 WARN  [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK], DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK]]
2023-06-07 22:56:22,126 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46337%2C1686178538405:(num 1686178539818) roll requested
2023-06-07 22:56:22,126 INFO  [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK], DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK]]
2023-06-07 22:56:22,340 INFO  [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK], DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK]]
2023-06-07 22:56:22,341 INFO  [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405/jenkins-hbase4.apache.org%2C46337%2C1686178538405.1686178539818 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405/jenkins-hbase4.apache.org%2C46337%2C1686178538405.1686178582126
2023-06-07 22:56:22,342 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK], DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK]]
2023-06-07 22:56:22,342 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405/jenkins-hbase4.apache.org%2C46337%2C1686178538405.1686178539818 is not closed yet, will try archiving it next time
2023-06-07 22:56:32,138 INFO  [Listener at localhost/37029] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1])
2023-06-07 22:56:37,141 INFO  [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK], DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK]]
2023-06-07 22:56:37,141 WARN  [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK], DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK]]
2023-06-07 22:56:37,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46337] regionserver.HRegion(9158): Flush requested on 620cec8671a9d14c4b8fce34c523f8ec
2023-06-07 22:56:37,141 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46337%2C1686178538405:(num 1686178582126) roll requested
2023-06-07 22:56:37,141 INFO  [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 620cec8671a9d14c4b8fce34c523f8ec 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-07 22:56:39,142 INFO  [Listener at localhost/37029] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1])
2023-06-07 22:56:42,143 INFO  [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK], DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK]]
2023-06-07 22:56:42,143 WARN  [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK], DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK]]
2023-06-07 22:56:42,155 INFO  [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK], DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK]]
2023-06-07 22:56:42,155 WARN  [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK], DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK]]
2023-06-07 22:56:42,157 INFO  [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405/jenkins-hbase4.apache.org%2C46337%2C1686178538405.1686178582126 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405/jenkins-hbase4.apache.org%2C46337%2C1686178538405.1686178597141
2023-06-07 22:56:42,158 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36469,DS-c48b7d3a-ee64-4b4d-8d2d-05ec20fc63c3,DISK], DatanodeInfoWithStorage[127.0.0.1:39159,DS-fbebe618-7125-4efa-a2a2-f87f19f5f32d,DISK]]
2023-06-07 22:56:42,158 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405/jenkins-hbase4.apache.org%2C46337%2C1686178538405.1686178582126 is not closed yet, will try archiving it next time
2023-06-07 22:56:42,163 INFO  [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/.tmp/info/ff2561f1a7594dc89b8fc0aed2634444
2023-06-07 22:56:42,172 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/.tmp/info/ff2561f1a7594dc89b8fc0aed2634444 as hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/ff2561f1a7594dc89b8fc0aed2634444
2023-06-07 22:56:42,181 INFO  [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/ff2561f1a7594dc89b8fc0aed2634444, entries=7, sequenceid=31, filesize=12.1 K
2023-06-07 22:56:42,185 INFO  [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 620cec8671a9d14c4b8fce34c523f8ec in 5044ms, sequenceid=31, compaction requested=true
2023-06-07 22:56:42,185 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 620cec8671a9d14c4b8fce34c523f8ec:
2023-06-07 22:56:42,185 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K
2023-06-07 22:56:42,185 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1
2023-06-07 22:56:42,185 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/b8bebf9da23b43b59a173c5fa5f6f777 because midkey is the same as first or last row
2023-06-07 22:56:42,187 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-07 22:56:42,187 DEBUG [RS:0;jenkins-hbase4:46337-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-07 22:56:42,192 DEBUG [RS:0;jenkins-hbase4:46337-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-07 22:56:42,194 DEBUG [RS:0;jenkins-hbase4:46337-shortCompactions-0] regionserver.HStore(1912): 620cec8671a9d14c4b8fce34c523f8ec/info is initiating minor compaction (all files)
2023-06-07 22:56:42,194 INFO  [RS:0;jenkins-hbase4:46337-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 620cec8671a9d14c4b8fce34c523f8ec/info in TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.
2023-06-07 22:56:42,195 INFO  [RS:0;jenkins-hbase4:46337-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/b8bebf9da23b43b59a173c5fa5f6f777, hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/c1b51315f492410c82aa3d1f25db73f8, hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/ff2561f1a7594dc89b8fc0aed2634444] into tmpdir=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/.tmp, totalSize=36.3 K
2023-06-07 22:56:42,196 DEBUG [RS:0;jenkins-hbase4:46337-shortCompactions-0] compactions.Compactor(207): Compacting b8bebf9da23b43b59a173c5fa5f6f777, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1686178550876
2023-06-07 22:56:42,197 DEBUG [RS:0;jenkins-hbase4:46337-shortCompactions-0] compactions.Compactor(207): Compacting c1b51315f492410c82aa3d1f25db73f8, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1686178564900
2023-06-07 22:56:42,197 DEBUG [RS:0;jenkins-hbase4:46337-shortCompactions-0] compactions.Compactor(207): Compacting ff2561f1a7594dc89b8fc0aed2634444, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1686178579721
2023-06-07 22:56:42,223 INFO  [RS:0;jenkins-hbase4:46337-shortCompactions-0] throttle.PressureAwareThroughputController(145): 620cec8671a9d14c4b8fce34c523f8ec#info#compaction#3 average throughput is 10.77 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-07 22:56:42,242 DEBUG [RS:0;jenkins-hbase4:46337-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/.tmp/info/714d23e0ea484e7a812214665edacd7f as hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/714d23e0ea484e7a812214665edacd7f
2023-06-07 22:56:42,258 INFO  [RS:0;jenkins-hbase4:46337-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 620cec8671a9d14c4b8fce34c523f8ec/info of 620cec8671a9d14c4b8fce34c523f8ec into 714d23e0ea484e7a812214665edacd7f(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-07 22:56:42,258 DEBUG [RS:0;jenkins-hbase4:46337-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 620cec8671a9d14c4b8fce34c523f8ec:
2023-06-07 22:56:42,258 INFO  [RS:0;jenkins-hbase4:46337-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec., storeName=620cec8671a9d14c4b8fce34c523f8ec/info, priority=13, startTime=1686178602187; duration=0sec
2023-06-07 22:56:42,259 DEBUG [RS:0;jenkins-hbase4:46337-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K
2023-06-07 22:56:42,259 DEBUG [RS:0;jenkins-hbase4:46337-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1
2023-06-07 22:56:42,259 DEBUG [RS:0;jenkins-hbase4:46337-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/714d23e0ea484e7a812214665edacd7f because midkey is the same as first or last row
2023-06-07 22:56:42,260 DEBUG [RS:0;jenkins-hbase4:46337-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-07 22:56:42,566 INFO  [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405/jenkins-hbase4.apache.org%2C46337%2C1686178538405.1686178582126 to hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/oldWALs/jenkins-hbase4.apache.org%2C46337%2C1686178538405.1686178582126
2023-06-07 22:56:54,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46337] regionserver.HRegion(9158): Flush requested on 620cec8671a9d14c4b8fce34c523f8ec
2023-06-07 22:56:54,262 INFO  [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 620cec8671a9d14c4b8fce34c523f8ec 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-07 22:56:54,279 INFO  [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/.tmp/info/7777b2b6ada54dd5aa9c57a910a7a873
2023-06-07 22:56:54,287 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/.tmp/info/7777b2b6ada54dd5aa9c57a910a7a873 as hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/7777b2b6ada54dd5aa9c57a910a7a873
2023-06-07 22:56:54,294 INFO  [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/7777b2b6ada54dd5aa9c57a910a7a873, entries=7, sequenceid=42, filesize=12.1 K
2023-06-07 22:56:54,295 INFO  [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 620cec8671a9d14c4b8fce34c523f8ec in 33ms, sequenceid=42, compaction requested=false
2023-06-07 22:56:54,295 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 620cec8671a9d14c4b8fce34c523f8ec:
2023-06-07 22:56:54,295 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K
2023-06-07 22:56:54,295 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1
2023-06-07 22:56:54,295 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/714d23e0ea484e7a812214665edacd7f because midkey is the same as first or last row
2023-06-07 22:57:02,270 INFO  [Listener at localhost/37029] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-06-07 22:57:02,272 INFO  [Listener at localhost/37029] client.ConnectionImplementation(1980): Closing master protocol: MasterService
2023-06-07 22:57:02,272 DEBUG [Listener at localhost/37029] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7036f427 to 127.0.0.1:56943
2023-06-07 22:57:02,272 DEBUG [Listener at localhost/37029] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-07 22:57:02,273 DEBUG [Listener at localhost/37029] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-06-07 22:57:02,273 DEBUG [Listener at localhost/37029] util.JVMClusterUtil(257): Found active master hash=619720735, stopped=false
2023-06-07 22:57:02,273 INFO  [Listener at localhost/37029] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,39149,1686178537215
2023-06-07 22:57:02,275 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:46337-0x100a7811eaa0001, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-07 22:57:02,275 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-07 22:57:02,275 INFO  [Listener at localhost/37029] procedure2.ProcedureExecutor(629): Stopping
2023-06-07 22:57:02,276 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:57:02,276 DEBUG [Listener at localhost/37029] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x63f6cae1 to 127.0.0.1:56943
2023-06-07 22:57:02,276 DEBUG [Listener at localhost/37029] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-07 22:57:02,277 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46337-0x100a7811eaa0001, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-07 22:57:02,277 INFO  [Listener at localhost/37029] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,46337,1686178538405' *****
2023-06-07 22:57:02,277 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-07 22:57:02,277 INFO  [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(1064): Closing user regions
2023-06-07 22:57:02,277 INFO  [Listener at localhost/37029] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-06-07 22:57:02,277 INFO  [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(3303): Received CLOSE for 620cec8671a9d14c4b8fce34c523f8ec
2023-06-07 22:57:02,278 INFO  [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(3303): Received CLOSE for a82d35c1a007f276f9578957f2f6d668
2023-06-07 22:57:02,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 620cec8671a9d14c4b8fce34c523f8ec, disabling compactions & flushes
2023-06-07 22:57:02,278 INFO  [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.
2023-06-07 22:57:02,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.
2023-06-07 22:57:02,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec. after waiting 0 ms
2023-06-07 22:57:02,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.
2023-06-07 22:57:02,279 INFO  [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 620cec8671a9d14c4b8fce34c523f8ec 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB
2023-06-07 22:57:02,279 INFO  [RS:0;jenkins-hbase4:46337] regionserver.HeapMemoryManager(220): Stopping
2023-06-07 22:57:02,280 INFO  [RS:0;jenkins-hbase4:46337] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-06-07 22:57:02,280 INFO  [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-06-07 22:57:02,280 INFO  [RS:0;jenkins-hbase4:46337] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-06-07 22:57:02,280 INFO [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(3305): Received CLOSE for the region: a82d35c1a007f276f9578957f2f6d668, which we are already trying to CLOSE, but not completed yet 2023-06-07 22:57:02,280 INFO [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:57:02,280 DEBUG [RS:0;jenkins-hbase4:46337] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x19304fcc to 127.0.0.1:56943 2023-06-07 22:57:02,280 DEBUG [RS:0;jenkins-hbase4:46337] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:57:02,280 INFO [RS:0;jenkins-hbase4:46337] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-07 22:57:02,280 INFO [RS:0;jenkins-hbase4:46337] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-07 22:57:02,280 INFO [RS:0;jenkins-hbase4:46337] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-06-07 22:57:02,280 INFO [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-07 22:57:02,281 INFO [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-07 22:57:02,281 DEBUG [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(1478): Online Regions={620cec8671a9d14c4b8fce34c523f8ec=TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec., 1588230740=hbase:meta,,1.1588230740, a82d35c1a007f276f9578957f2f6d668=hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668.} 2023-06-07 22:57:02,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-07 22:57:02,281 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-07 22:57:02,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-07 22:57:02,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-07 22:57:02,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-07 22:57:02,281 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-06-07 22:57:02,283 DEBUG [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(1504): Waiting on 1588230740, 620cec8671a9d14c4b8fce34c523f8ec, a82d35c1a007f276f9578957f2f6d668 2023-06-07 22:57:02,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), 
to=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/.tmp/info/598f89f1776d4e198d10ca0df10ee1b8 2023-06-07 22:57:02,308 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/.tmp/info/5c6efa24e6f643c6a37aa3d9c6f2e3b2 2023-06-07 22:57:02,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/.tmp/info/598f89f1776d4e198d10ca0df10ee1b8 as hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/598f89f1776d4e198d10ca0df10ee1b8 2023-06-07 22:57:02,321 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/598f89f1776d4e198d10ca0df10ee1b8, entries=3, sequenceid=48, filesize=7.9 K 2023-06-07 22:57:02,324 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for 620cec8671a9d14c4b8fce34c523f8ec in 46ms, sequenceid=48, compaction requested=true 2023-06-07 22:57:02,326 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.-1] regionserver.HStore(2712): Moving the files 
[hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/b8bebf9da23b43b59a173c5fa5f6f777, hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/c1b51315f492410c82aa3d1f25db73f8, hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/ff2561f1a7594dc89b8fc0aed2634444] to archive 2023-06-07 22:57:02,328 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-06-07 22:57:02,335 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/b8bebf9da23b43b59a173c5fa5f6f777 to hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/archive/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/b8bebf9da23b43b59a173c5fa5f6f777 2023-06-07 22:57:02,337 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/c1b51315f492410c82aa3d1f25db73f8 to 
hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/archive/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/c1b51315f492410c82aa3d1f25db73f8 2023-06-07 22:57:02,339 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/ff2561f1a7594dc89b8fc0aed2634444 to hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/archive/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/info/ff2561f1a7594dc89b8fc0aed2634444 2023-06-07 22:57:02,340 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/.tmp/table/d2f87be92141419aa44c79acc64f238e 2023-06-07 22:57:02,349 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/.tmp/info/5c6efa24e6f643c6a37aa3d9c6f2e3b2 as hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/info/5c6efa24e6f643c6a37aa3d9c6f2e3b2 2023-06-07 22:57:02,356 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/info/5c6efa24e6f643c6a37aa3d9c6f2e3b2, entries=20, sequenceid=14, filesize=7.4 K 2023-06-07 22:57:02,357 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/.tmp/table/d2f87be92141419aa44c79acc64f238e as hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/table/d2f87be92141419aa44c79acc64f238e 2023-06-07 22:57:02,367 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/table/d2f87be92141419aa44c79acc64f238e, entries=4, sequenceid=14, filesize=4.8 K 2023-06-07 22:57:02,368 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2934, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 87ms, sequenceid=14, compaction requested=false 2023-06-07 22:57:02,369 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/default/TestLogRolling-testSlowSyncLogRolling/620cec8671a9d14c4b8fce34c523f8ec/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-06-07 22:57:02,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec. 2023-06-07 22:57:02,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 620cec8671a9d14c4b8fce34c523f8ec: 2023-06-07 22:57:02,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1686178540837.620cec8671a9d14c4b8fce34c523f8ec. 
2023-06-07 22:57:02,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a82d35c1a007f276f9578957f2f6d668, disabling compactions & flushes 2023-06-07 22:57:02,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. 2023-06-07 22:57:02,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. 2023-06-07 22:57:02,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. after waiting 0 ms 2023-06-07 22:57:02,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. 
2023-06-07 22:57:02,374 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing a82d35c1a007f276f9578957f2f6d668 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-07 22:57:02,378 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-06-07 22:57:02,379 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-07 22:57:02,380 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-07 22:57:02,380 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-07 22:57:02,381 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-07 22:57:02,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/namespace/a82d35c1a007f276f9578957f2f6d668/.tmp/info/529e4d4012934b59934cc25894d35666 2023-06-07 22:57:02,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/namespace/a82d35c1a007f276f9578957f2f6d668/.tmp/info/529e4d4012934b59934cc25894d35666 as hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/namespace/a82d35c1a007f276f9578957f2f6d668/info/529e4d4012934b59934cc25894d35666 2023-06-07 22:57:02,404 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/namespace/a82d35c1a007f276f9578957f2f6d668/info/529e4d4012934b59934cc25894d35666, entries=2, sequenceid=6, filesize=4.8 K 2023-06-07 22:57:02,405 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for a82d35c1a007f276f9578957f2f6d668 in 31ms, sequenceid=6, compaction requested=false 2023-06-07 22:57:02,411 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/data/hbase/namespace/a82d35c1a007f276f9578957f2f6d668/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-07 22:57:02,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. 2023-06-07 22:57:02,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a82d35c1a007f276f9578957f2f6d668: 2023-06-07 22:57:02,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686178540106.a82d35c1a007f276f9578957f2f6d668. 2023-06-07 22:57:02,483 INFO [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46337,1686178538405; all regions closed. 
2023-06-07 22:57:02,484 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:57:02,491 DEBUG [RS:0;jenkins-hbase4:46337] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/oldWALs 2023-06-07 22:57:02,491 INFO [RS:0;jenkins-hbase4:46337] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C46337%2C1686178538405.meta:.meta(num 1686178539870) 2023-06-07 22:57:02,491 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/WALs/jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:57:02,501 DEBUG [RS:0;jenkins-hbase4:46337] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/oldWALs 2023-06-07 22:57:02,501 INFO [RS:0;jenkins-hbase4:46337] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C46337%2C1686178538405:(num 1686178597141) 2023-06-07 22:57:02,501 DEBUG [RS:0;jenkins-hbase4:46337] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:57:02,501 INFO [RS:0;jenkins-hbase4:46337] regionserver.LeaseManager(133): Closed leases 2023-06-07 22:57:02,502 INFO [RS:0;jenkins-hbase4:46337] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-07 22:57:02,502 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-07 22:57:02,503 INFO [RS:0;jenkins-hbase4:46337] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46337 2023-06-07 22:57:02,509 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-07 22:57:02,509 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:46337-0x100a7811eaa0001, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46337,1686178538405 2023-06-07 22:57:02,509 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:46337-0x100a7811eaa0001, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-07 22:57:02,510 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46337,1686178538405] 2023-06-07 22:57:02,511 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46337,1686178538405; numProcessing=1 2023-06-07 22:57:02,517 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46337,1686178538405 already deleted, retry=false 2023-06-07 22:57:02,517 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46337,1686178538405 expired; onlineServers=0 2023-06-07 22:57:02,517 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,39149,1686178537215' ***** 2023-06-07 22:57:02,517 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-07 22:57:02,518 DEBUG 
[M:0;jenkins-hbase4:39149] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@65cd3a24, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-07 22:57:02,518 INFO [M:0;jenkins-hbase4:39149] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39149,1686178537215 2023-06-07 22:57:02,518 INFO [M:0;jenkins-hbase4:39149] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39149,1686178537215; all regions closed. 2023-06-07 22:57:02,518 DEBUG [M:0;jenkins-hbase4:39149] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:57:02,518 DEBUG [M:0;jenkins-hbase4:39149] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-07 22:57:02,518 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-06-07 22:57:02,519 DEBUG [M:0;jenkins-hbase4:39149] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-07 22:57:02,519 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178539390] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178539390,5,FailOnTimeoutGroup] 2023-06-07 22:57:02,519 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178539391] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178539391,5,FailOnTimeoutGroup] 2023-06-07 22:57:02,519 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-07 22:57:02,519 INFO [M:0;jenkins-hbase4:39149] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-07 22:57:02,520 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:02,520 INFO [M:0;jenkins-hbase4:39149] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-06-07 22:57:02,520 INFO [M:0;jenkins-hbase4:39149] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-06-07 22:57:02,520 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-07 22:57:02,520 DEBUG [M:0;jenkins-hbase4:39149] master.HMaster(1512): Stopping service threads 2023-06-07 22:57:02,520 INFO [M:0;jenkins-hbase4:39149] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-07 22:57:02,521 INFO [M:0;jenkins-hbase4:39149] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-07 22:57:02,521 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-06-07 22:57:02,522 DEBUG [M:0;jenkins-hbase4:39149] zookeeper.ZKUtil(398): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-07 22:57:02,522 WARN [M:0;jenkins-hbase4:39149] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-07 22:57:02,522 INFO [M:0;jenkins-hbase4:39149] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-07 22:57:02,522 INFO [M:0;jenkins-hbase4:39149] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-07 22:57:02,522 DEBUG [M:0;jenkins-hbase4:39149] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-07 22:57:02,523 INFO [M:0;jenkins-hbase4:39149] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-07 22:57:02,523 DEBUG [M:0;jenkins-hbase4:39149] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 22:57:02,523 DEBUG [M:0;jenkins-hbase4:39149] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-07 22:57:02,523 DEBUG [M:0;jenkins-hbase4:39149] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 22:57:02,523 INFO [M:0;jenkins-hbase4:39149] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.27 KB heapSize=46.71 KB 2023-06-07 22:57:02,537 INFO [M:0;jenkins-hbase4:39149] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.27 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/87e0ae826f8b4c51a79db334689ddf09 2023-06-07 22:57:02,544 INFO [M:0;jenkins-hbase4:39149] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 87e0ae826f8b4c51a79db334689ddf09 2023-06-07 22:57:02,545 DEBUG [M:0;jenkins-hbase4:39149] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/87e0ae826f8b4c51a79db334689ddf09 as hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/87e0ae826f8b4c51a79db334689ddf09 2023-06-07 22:57:02,551 INFO [M:0;jenkins-hbase4:39149] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 87e0ae826f8b4c51a79db334689ddf09 2023-06-07 22:57:02,552 INFO [M:0;jenkins-hbase4:39149] regionserver.HStore(1080): 
Added hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/87e0ae826f8b4c51a79db334689ddf09, entries=11, sequenceid=100, filesize=6.1 K 2023-06-07 22:57:02,553 INFO [M:0;jenkins-hbase4:39149] regionserver.HRegion(2948): Finished flush of dataSize ~38.27 KB/39184, heapSize ~46.70 KB/47816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 30ms, sequenceid=100, compaction requested=false 2023-06-07 22:57:02,554 INFO [M:0;jenkins-hbase4:39149] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 22:57:02,554 DEBUG [M:0;jenkins-hbase4:39149] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-07 22:57:02,554 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/MasterData/WALs/jenkins-hbase4.apache.org,39149,1686178537215 2023-06-07 22:57:02,558 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-07 22:57:02,559 INFO [M:0;jenkins-hbase4:39149] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-07 22:57:02,559 INFO [M:0;jenkins-hbase4:39149] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39149 2023-06-07 22:57:02,562 DEBUG [M:0;jenkins-hbase4:39149] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,39149,1686178537215 already deleted, retry=false 2023-06-07 22:57:02,611 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:46337-0x100a7811eaa0001, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-07 22:57:02,611 INFO [RS:0;jenkins-hbase4:46337] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46337,1686178538405; zookeeper connection closed. 
2023-06-07 22:57:02,611 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:46337-0x100a7811eaa0001, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-07 22:57:02,611 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@34b46a9d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@34b46a9d 2023-06-07 22:57:02,612 INFO [Listener at localhost/37029] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-07 22:57:02,711 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-07 22:57:02,711 INFO [M:0;jenkins-hbase4:39149] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39149,1686178537215; zookeeper connection closed. 
2023-06-07 22:57:02,711 DEBUG [Listener at localhost/37029-EventThread] zookeeper.ZKWatcher(600): master:39149-0x100a7811eaa0000, quorum=127.0.0.1:56943, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-07 22:57:02,713 WARN [Listener at localhost/37029] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-07 22:57:02,716 INFO [Listener at localhost/37029] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-07 22:57:02,821 WARN [BP-1685428129-172.31.14.131-1686178534116 heartbeating to localhost/127.0.0.1:43147] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-07 22:57:02,821 WARN [BP-1685428129-172.31.14.131-1686178534116 heartbeating to localhost/127.0.0.1:43147] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1685428129-172.31.14.131-1686178534116 (Datanode Uuid dcc337b2-c192-4ff4-a747-01abf7c7d861) service to localhost/127.0.0.1:43147 2023-06-07 22:57:02,823 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/cluster_8f25a331-5b97-aa66-d890-c4a115fde1f4/dfs/data/data3/current/BP-1685428129-172.31.14.131-1686178534116] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 22:57:02,823 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/cluster_8f25a331-5b97-aa66-d890-c4a115fde1f4/dfs/data/data4/current/BP-1685428129-172.31.14.131-1686178534116] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 22:57:02,824 WARN [Listener at localhost/37029] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-07 22:57:02,826 
INFO [Listener at localhost/37029] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-07 22:57:02,929 WARN [BP-1685428129-172.31.14.131-1686178534116 heartbeating to localhost/127.0.0.1:43147] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-07 22:57:02,929 WARN [BP-1685428129-172.31.14.131-1686178534116 heartbeating to localhost/127.0.0.1:43147] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1685428129-172.31.14.131-1686178534116 (Datanode Uuid d19a713c-dcb8-4b12-8846-f0a59d9b0d49) service to localhost/127.0.0.1:43147 2023-06-07 22:57:02,930 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/cluster_8f25a331-5b97-aa66-d890-c4a115fde1f4/dfs/data/data1/current/BP-1685428129-172.31.14.131-1686178534116] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 22:57:02,930 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/cluster_8f25a331-5b97-aa66-d890-c4a115fde1f4/dfs/data/data2/current/BP-1685428129-172.31.14.131-1686178534116] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 22:57:02,964 INFO [Listener at localhost/37029] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-07 22:57:03,076 INFO [Listener at localhost/37029] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-07 22:57:03,110 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-07 22:57:03,121 INFO [Listener at localhost/37029] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling 
Thread=51 (was 10) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1695066929) connection to localhost/127.0.0.1:43147 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 
java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43147 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37029 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost:43147 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@d26a43 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1695066929) connection to localhost/127.0.0.1:43147 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: regionserver/jenkins-hbase4:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) Potentially hanging 
thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client 
(1695066929) connection to localhost/127.0.0.1:43147 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) - Thread LEAK? -, OpenFileDescriptor=439 (was 263) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=81 (was 292), ProcessCount=170 (was 171), AvailableMemoryMB=1034 (was 1768) 2023-06-07 22:57:03,130 INFO [Listener at localhost/37029] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=52, OpenFileDescriptor=439, MaxFileDescriptor=60000, SystemLoadAverage=81, ProcessCount=170, AvailableMemoryMB=1034 2023-06-07 22:57:03,130 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-07 22:57:03,130 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/hadoop.log.dir so I do NOT create it in target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780 2023-06-07 22:57:03,131 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/264e92d7-6d85-4a27-ae97-606384e0c916/hadoop.tmp.dir so I do NOT create it in target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780 2023-06-07 22:57:03,131 INFO [Listener at localhost/37029] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a, deleteOnExit=true 2023-06-07 22:57:03,131 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-07 22:57:03,131 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): 
Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/test.cache.data in system properties and HBase conf 2023-06-07 22:57:03,131 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/hadoop.tmp.dir in system properties and HBase conf 2023-06-07 22:57:03,132 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/hadoop.log.dir in system properties and HBase conf 2023-06-07 22:57:03,132 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-07 22:57:03,132 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-07 22:57:03,132 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-07 22:57:03,132 DEBUG [Listener at localhost/37029] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-06-07 22:57:03,133 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-07 22:57:03,133 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-07 22:57:03,133 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-07 22:57:03,133 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-07 22:57:03,133 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-07 22:57:03,133 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-07 22:57:03,133 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-07 22:57:03,133 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-07 22:57:03,133 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-07 22:57:03,134 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/nfs.dump.dir in system properties and HBase conf 2023-06-07 22:57:03,134 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/java.io.tmpdir in system properties and HBase conf 2023-06-07 22:57:03,134 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-07 22:57:03,134 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-07 22:57:03,134 INFO [Listener at localhost/37029] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-07 22:57:03,136 WARN [Listener at localhost/37029] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-07 22:57:03,138 WARN [Listener at localhost/37029] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-07 22:57:03,139 WARN [Listener at localhost/37029] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-07 22:57:03,180 WARN [Listener at localhost/37029] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-07 22:57:03,183 INFO [Listener at localhost/37029] log.Slf4jLog(67): jetty-6.1.26 2023-06-07 22:57:03,187 INFO [Listener at localhost/37029] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/java.io.tmpdir/Jetty_localhost_33565_hdfs____mdsy66/webapp 2023-06-07 22:57:03,302 INFO [Listener at localhost/37029] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33565 2023-06-07 22:57:03,304 WARN [Listener at localhost/37029] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-07 22:57:03,308 WARN [Listener at localhost/37029] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-07 22:57:03,308 WARN [Listener at localhost/37029] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-07 22:57:03,350 WARN [Listener at localhost/38639] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-07 22:57:03,360 WARN [Listener at localhost/38639] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-07 22:57:03,363 WARN [Listener at localhost/38639] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-07 22:57:03,364 INFO [Listener at localhost/38639] log.Slf4jLog(67): jetty-6.1.26 2023-06-07 22:57:03,371 INFO [Listener at localhost/38639] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/java.io.tmpdir/Jetty_localhost_44113_datanode____.imav46/webapp 2023-06-07 22:57:03,463 INFO [Listener at localhost/38639] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44113 2023-06-07 22:57:03,471 WARN [Listener at localhost/39513] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-07 22:57:03,485 WARN [Listener at localhost/39513] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-07 22:57:03,487 WARN [Listener at localhost/39513] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-07 22:57:03,489 INFO [Listener at localhost/39513] log.Slf4jLog(67): jetty-6.1.26 
2023-06-07 22:57:03,493 INFO [Listener at localhost/39513] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/java.io.tmpdir/Jetty_localhost_41487_datanode____3kj6nx/webapp 2023-06-07 22:57:03,587 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x55d460eab6a4ccc2: Processing first storage report for DS-bcc3699b-5a64-41cf-aec8-10786f782306 from datanode a990a84f-5c04-4b37-bc31-2d73b27b282f 2023-06-07 22:57:03,587 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x55d460eab6a4ccc2: from storage DS-bcc3699b-5a64-41cf-aec8-10786f782306 node DatanodeRegistration(127.0.0.1:35915, datanodeUuid=a990a84f-5c04-4b37-bc31-2d73b27b282f, infoPort=34219, infoSecurePort=0, ipcPort=39513, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:57:03,587 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x55d460eab6a4ccc2: Processing first storage report for DS-67a55d6a-da38-4073-8bdb-bcf85466e589 from datanode a990a84f-5c04-4b37-bc31-2d73b27b282f 2023-06-07 22:57:03,587 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x55d460eab6a4ccc2: from storage DS-67a55d6a-da38-4073-8bdb-bcf85466e589 node DatanodeRegistration(127.0.0.1:35915, datanodeUuid=a990a84f-5c04-4b37-bc31-2d73b27b282f, infoPort=34219, infoSecurePort=0, ipcPort=39513, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:57:03,604 INFO [Listener at localhost/39513] 
log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41487 2023-06-07 22:57:03,612 WARN [Listener at localhost/34723] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-07 22:57:03,670 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-07 22:57:03,720 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9f59e3d9f9918a61: Processing first storage report for DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836 from datanode fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9 2023-06-07 22:57:03,720 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9f59e3d9f9918a61: from storage DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836 node DatanodeRegistration(127.0.0.1:39021, datanodeUuid=fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9, infoPort=40395, infoSecurePort=0, ipcPort=34723, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:57:03,720 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9f59e3d9f9918a61: Processing first storage report for DS-4dc09c13-fb7b-4609-9540-9e7447e8d983 from datanode fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9 2023-06-07 22:57:03,720 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9f59e3d9f9918a61: from storage DS-4dc09c13-fb7b-4609-9540-9e7447e8d983 node DatanodeRegistration(127.0.0.1:39021, datanodeUuid=fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9, infoPort=40395, infoSecurePort=0, ipcPort=34723, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:57:03,721 DEBUG [Listener at localhost/34723] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780 2023-06-07 22:57:03,726 INFO [Listener at localhost/34723] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/zookeeper_0, clientPort=57222, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-07 22:57:03,727 INFO [Listener at localhost/34723] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57222 2023-06-07 22:57:03,728 INFO [Listener at localhost/34723] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:57:03,728 INFO [Listener at localhost/34723] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:57:03,748 INFO [Listener at localhost/34723] util.FSUtils(471): Created version file at hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719 with version=8 2023-06-07 22:57:03,748 INFO [Listener at localhost/34723] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to 
hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/hbase-staging 2023-06-07 22:57:03,749 INFO [Listener at localhost/34723] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-07 22:57:03,750 INFO [Listener at localhost/34723] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-07 22:57:03,750 INFO [Listener at localhost/34723] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-07 22:57:03,750 INFO [Listener at localhost/34723] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-07 22:57:03,750 INFO [Listener at localhost/34723] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-07 22:57:03,750 INFO [Listener at localhost/34723] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-07 22:57:03,750 INFO [Listener at localhost/34723] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-07 22:57:03,751 INFO [Listener at localhost/34723] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45615 2023-06-07 22:57:03,752 INFO [Listener at localhost/34723] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:57:03,752 INFO [Listener at localhost/34723] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:57:03,754 INFO [Listener at localhost/34723] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45615 connecting to ZooKeeper ensemble=127.0.0.1:57222 2023-06-07 22:57:03,760 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:456150x0, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-07 22:57:03,761 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45615-0x100a78273ee0000 connected 2023-06-07 22:57:03,783 DEBUG [Listener at localhost/34723] zookeeper.ZKUtil(164): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-07 22:57:03,783 DEBUG [Listener at localhost/34723] zookeeper.ZKUtil(164): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 22:57:03,784 DEBUG [Listener at localhost/34723] zookeeper.ZKUtil(164): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-07 22:57:03,784 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45615 2023-06-07 22:57:03,784 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45615 2023-06-07 22:57:03,784 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45615 2023-06-07 
22:57:03,785 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45615 2023-06-07 22:57:03,785 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45615 2023-06-07 22:57:03,785 INFO [Listener at localhost/34723] master.HMaster(444): hbase.rootdir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719, hbase.cluster.distributed=false 2023-06-07 22:57:03,799 INFO [Listener at localhost/34723] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-07 22:57:03,799 INFO [Listener at localhost/34723] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-07 22:57:03,799 INFO [Listener at localhost/34723] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-07 22:57:03,799 INFO [Listener at localhost/34723] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-07 22:57:03,799 INFO [Listener at localhost/34723] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-07 22:57:03,800 INFO [Listener at localhost/34723] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-07 22:57:03,800 INFO [Listener at localhost/34723] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 
2023-06-07 22:57:03,801 INFO [Listener at localhost/34723] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46005 2023-06-07 22:57:03,801 INFO [Listener at localhost/34723] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-07 22:57:03,802 DEBUG [Listener at localhost/34723] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-07 22:57:03,803 INFO [Listener at localhost/34723] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:57:03,804 INFO [Listener at localhost/34723] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:57:03,806 INFO [Listener at localhost/34723] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46005 connecting to ZooKeeper ensemble=127.0.0.1:57222 2023-06-07 22:57:03,809 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:460050x0, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-07 22:57:03,810 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46005-0x100a78273ee0001 connected 2023-06-07 22:57:03,810 DEBUG [Listener at localhost/34723] zookeeper.ZKUtil(164): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-07 22:57:03,810 DEBUG [Listener at localhost/34723] zookeeper.ZKUtil(164): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 22:57:03,811 DEBUG [Listener at localhost/34723] zookeeper.ZKUtil(164): regionserver:46005-0x100a78273ee0001, 
quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-07 22:57:03,811 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46005 2023-06-07 22:57:03,811 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46005 2023-06-07 22:57:03,812 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46005 2023-06-07 22:57:03,812 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46005 2023-06-07 22:57:03,812 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46005 2023-06-07 22:57:03,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,45615,1686178623749 2023-06-07 22:57:03,815 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-07 22:57:03,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,45615,1686178623749 2023-06-07 22:57:03,819 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-07 22:57:03,819 DEBUG [Listener at 
localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-07 22:57:03,819 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:03,820 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-07 22:57:03,821 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-07 22:57:03,821 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,45615,1686178623749 from backup master directory 2023-06-07 22:57:03,823 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,45615,1686178623749 2023-06-07 22:57:03,823 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-07 22:57:03,823 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-07 22:57:03,823 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,45615,1686178623749 2023-06-07 22:57:03,839 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/hbase.id with ID: e118287c-5b09-438c-9b1f-4a9e83c12ec4 2023-06-07 22:57:03,853 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:57:03,856 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:03,866 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7918dd2e to 127.0.0.1:57222 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-07 22:57:03,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2d849945, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-07 22:57:03,869 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-07 22:57:03,870 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-07 22:57:03,871 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-07 22:57:03,872 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/data/master/store-tmp 2023-06-07 22:57:03,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:57:03,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-07 22:57:03,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 22:57:03,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-07 22:57:03,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-07 22:57:03,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 22:57:03,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 22:57:03,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-07 22:57:03,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/WALs/jenkins-hbase4.apache.org,45615,1686178623749 2023-06-07 22:57:03,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45615%2C1686178623749, suffix=, logDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/WALs/jenkins-hbase4.apache.org,45615,1686178623749, archiveDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/oldWALs, maxLogs=10 2023-06-07 22:57:03,894 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/WALs/jenkins-hbase4.apache.org,45615,1686178623749/jenkins-hbase4.apache.org%2C45615%2C1686178623749.1686178623887 2023-06-07 22:57:03,894 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK], 
DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] 2023-06-07 22:57:03,894 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:57:03,894 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:57:03,894 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:57:03,894 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:57:03,896 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:57:03,898 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-07 22:57:03,898 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to 
compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-07 22:57:03,899 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:57:03,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:57:03,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:57:03,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:57:03,906 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:57:03,906 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=871674, jitterRate=0.10839197039604187}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-07 22:57:03,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-07 22:57:03,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-07 22:57:03,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-07 22:57:03,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-07 22:57:03,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-07 22:57:03,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-07 22:57:03,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-07 22:57:03,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-07 22:57:03,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-07 22:57:03,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-06-07 22:57:03,925 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-07 22:57:03,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-07 22:57:03,926 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-07 22:57:03,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-07 22:57:03,927 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-07 22:57:03,930 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:03,931 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-07 22:57:03,931 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45615-0x100a78273ee0000, 
quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-07 22:57:03,932 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-07 22:57:03,934 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-07 22:57:03,934 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-07 22:57:03,934 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:03,934 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,45615,1686178623749, sessionid=0x100a78273ee0000, setting cluster-up flag (Was=false) 2023-06-07 22:57:03,938 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:03,944 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-07 22:57:03,945 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure 
member=jenkins-hbase4.apache.org,45615,1686178623749 2023-06-07 22:57:03,950 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:03,955 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-07 22:57:03,957 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45615,1686178623749 2023-06-07 22:57:03,957 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/.hbase-snapshot/.tmp 2023-06-07 22:57:03,960 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-07 22:57:03,960 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:57:03,960 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:57:03,960 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:57:03,961 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service 
name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:57:03,961 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-06-07 22:57:03,961 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:57:03,961 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-07 22:57:03,961 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:57:03,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686178653965 2023-06-07 22:57:03,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-07 22:57:03,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-07 22:57:03,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-07 22:57:03,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-07 22:57:03,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-07 22:57:03,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-07 22:57:03,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-07 22:57:03,971 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-07 22:57:03,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-07 22:57:03,971 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-07 22:57:03,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-07 22:57:03,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-07 22:57:03,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-07 22:57:03,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-07 22:57:03,972 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178623972,5,FailOnTimeoutGroup] 2023-06-07 22:57:03,972 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small 
files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178623972,5,FailOnTimeoutGroup] 2023-06-07 22:57:03,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-07 22:57:03,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-07 22:57:03,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-07 22:57:03,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-07 22:57:03,973 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'} 2023-06-07 22:57:03,990 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-07 22:57:03,991 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-07 22:57:03,991 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719 2023-06-07 22:57:04,002 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 
22:57:04,003 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-07 22:57:04,005 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740/info 2023-06-07 22:57:04,006 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-07 22:57:04,007 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:57:04,007 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-07 22:57:04,009 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740/rep_barrier 2023-06-07 22:57:04,009 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-07 22:57:04,010 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:57:04,010 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-07 22:57:04,012 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740/table 2023-06-07 22:57:04,012 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-07 22:57:04,013 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:57:04,014 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740 2023-06-07 22:57:04,015 INFO [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(951): ClusterId : e118287c-5b09-438c-9b1f-4a9e83c12ec4 2023-06-07 22:57:04,016 DEBUG [RS:0;jenkins-hbase4:46005] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-07 22:57:04,016 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740 2023-06-07 22:57:04,019 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-07 22:57:04,020 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-07 22:57:04,022 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:57:04,023 DEBUG [RS:0;jenkins-hbase4:46005] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-07 22:57:04,023 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=822704, jitterRate=0.04612267017364502}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-07 22:57:04,023 DEBUG [RS:0;jenkins-hbase4:46005] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-07 22:57:04,023 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-07 22:57:04,023 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-07 22:57:04,023 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-07 22:57:04,023 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-07 22:57:04,023 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-07 22:57:04,023 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-07 22:57:04,024 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-07 22:57:04,024 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-07 22:57:04,025 DEBUG [RS:0;jenkins-hbase4:46005] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-07 22:57:04,026 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-07 22:57:04,027 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-07 22:57:04,026 DEBUG [RS:0;jenkins-hbase4:46005] zookeeper.ReadOnlyZKClient(139): Connect 0x0a944537 to 127.0.0.1:57222 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-07 22:57:04,027 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-07 22:57:04,030 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-07 22:57:04,032 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-07 22:57:04,032 DEBUG [RS:0;jenkins-hbase4:46005] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@453aa15, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-07 22:57:04,032 DEBUG [RS:0;jenkins-hbase4:46005] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6c2e47, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, 
maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-07 22:57:04,042 DEBUG [RS:0;jenkins-hbase4:46005] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46005 2023-06-07 22:57:04,042 INFO [RS:0;jenkins-hbase4:46005] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-07 22:57:04,042 INFO [RS:0;jenkins-hbase4:46005] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-07 22:57:04,042 DEBUG [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1022): About to register with Master. 2023-06-07 22:57:04,043 INFO [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,45615,1686178623749 with isa=jenkins-hbase4.apache.org/172.31.14.131:46005, startcode=1686178623798 2023-06-07 22:57:04,043 DEBUG [RS:0;jenkins-hbase4:46005] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-07 22:57:04,046 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58991, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-06-07 22:57:04,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45615] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46005,1686178623798 2023-06-07 22:57:04,048 DEBUG [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719 2023-06-07 22:57:04,048 DEBUG [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38639 2023-06-07 22:57:04,048 DEBUG [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-07 22:57:04,050 DEBUG 
[Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-07 22:57:04,051 DEBUG [RS:0;jenkins-hbase4:46005] zookeeper.ZKUtil(162): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46005,1686178623798
2023-06-07 22:57:04,051 WARN [RS:0;jenkins-hbase4:46005] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-07 22:57:04,051 INFO [RS:0;jenkins-hbase4:46005] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-07 22:57:04,051 DEBUG [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1946): logDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798
2023-06-07 22:57:04,051 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46005,1686178623798]
2023-06-07 22:57:04,054 DEBUG [RS:0;jenkins-hbase4:46005] zookeeper.ZKUtil(162): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46005,1686178623798
2023-06-07 22:57:04,056 DEBUG [RS:0;jenkins-hbase4:46005] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-06-07 22:57:04,056 INFO [RS:0;jenkins-hbase4:46005] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-06-07 22:57:04,058 INFO [RS:0;jenkins-hbase4:46005] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-06-07 22:57:04,059 INFO [RS:0;jenkins-hbase4:46005] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-06-07 22:57:04,059 INFO [RS:0;jenkins-hbase4:46005] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:04,063 INFO [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-06-07 22:57:04,064 INFO [RS:0;jenkins-hbase4:46005] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:04,065 DEBUG [RS:0;jenkins-hbase4:46005] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:04,065 DEBUG [RS:0;jenkins-hbase4:46005] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:04,065 DEBUG [RS:0;jenkins-hbase4:46005] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:04,065 DEBUG [RS:0;jenkins-hbase4:46005] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:04,065 DEBUG [RS:0;jenkins-hbase4:46005] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:04,065 DEBUG [RS:0;jenkins-hbase4:46005] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-06-07 22:57:04,065 DEBUG [RS:0;jenkins-hbase4:46005] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:04,065 DEBUG [RS:0;jenkins-hbase4:46005] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:04,065 DEBUG [RS:0;jenkins-hbase4:46005] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:04,065 DEBUG [RS:0;jenkins-hbase4:46005] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:04,066 INFO [RS:0;jenkins-hbase4:46005] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:04,066 INFO [RS:0;jenkins-hbase4:46005] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:04,066 INFO [RS:0;jenkins-hbase4:46005] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:04,078 INFO [RS:0;jenkins-hbase4:46005] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-06-07 22:57:04,078 INFO [RS:0;jenkins-hbase4:46005] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46005,1686178623798-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:04,097 INFO [RS:0;jenkins-hbase4:46005] regionserver.Replication(203): jenkins-hbase4.apache.org,46005,1686178623798 started
2023-06-07 22:57:04,097 INFO [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46005,1686178623798, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46005, sessionid=0x100a78273ee0001
2023-06-07 22:57:04,097 DEBUG [RS:0;jenkins-hbase4:46005] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-06-07 22:57:04,097 DEBUG [RS:0;jenkins-hbase4:46005] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46005,1686178623798
2023-06-07 22:57:04,097 DEBUG [RS:0;jenkins-hbase4:46005] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46005,1686178623798'
2023-06-07 22:57:04,097 DEBUG [RS:0;jenkins-hbase4:46005] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-07 22:57:04,098 DEBUG [RS:0;jenkins-hbase4:46005] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-07 22:57:04,098 DEBUG [RS:0;jenkins-hbase4:46005] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-06-07 22:57:04,098 DEBUG [RS:0;jenkins-hbase4:46005] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-06-07 22:57:04,098 DEBUG [RS:0;jenkins-hbase4:46005] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46005,1686178623798
2023-06-07 22:57:04,098 DEBUG [RS:0;jenkins-hbase4:46005] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46005,1686178623798'
2023-06-07 22:57:04,098 DEBUG [RS:0;jenkins-hbase4:46005] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-06-07 22:57:04,099 DEBUG [RS:0;jenkins-hbase4:46005] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-06-07 22:57:04,099 DEBUG [RS:0;jenkins-hbase4:46005] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-06-07 22:57:04,099 INFO [RS:0;jenkins-hbase4:46005] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-06-07 22:57:04,100 INFO [RS:0;jenkins-hbase4:46005] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-06-07 22:57:04,182 DEBUG [jenkins-hbase4:45615] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1
2023-06-07 22:57:04,183 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46005,1686178623798, state=OPENING
2023-06-07 22:57:04,185 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it
2023-06-07 22:57:04,187 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:57:04,187 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-06-07 22:57:04,187 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46005,1686178623798}]
2023-06-07 22:57:04,202 INFO [RS:0;jenkins-hbase4:46005] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46005%2C1686178623798, suffix=, logDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798, archiveDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/oldWALs, maxLogs=32
2023-06-07 22:57:04,213 INFO [RS:0;jenkins-hbase4:46005] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798/jenkins-hbase4.apache.org%2C46005%2C1686178623798.1686178624203
2023-06-07 22:57:04,214 DEBUG [RS:0;jenkins-hbase4:46005] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK], DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]]
2023-06-07 22:57:04,342 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46005,1686178623798
2023-06-07 22:57:04,342 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-06-07 22:57:04,344 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46768, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-06-07 22:57:04,349 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740
2023-06-07 22:57:04,349 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-07 22:57:04,351 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46005%2C1686178623798.meta, suffix=.meta, logDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798, archiveDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/oldWALs, maxLogs=32
2023-06-07 22:57:04,362 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798/jenkins-hbase4.apache.org%2C46005%2C1686178623798.meta.1686178624353.meta
2023-06-07 22:57:04,363 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK], DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]]
2023-06-07 22:57:04,363 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2023-06-07 22:57:04,363 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2023-06-07 22:57:04,363 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2023-06-07 22:57:04,364 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2023-06-07 22:57:04,364 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740
2023-06-07 22:57:04,364 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 22:57:04,364 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740
2023-06-07 22:57:04,364 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740
2023-06-07 22:57:04,366 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-06-07 22:57:04,367 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740/info
2023-06-07 22:57:04,367 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740/info
2023-06-07 22:57:04,368 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-06-07 22:57:04,368 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 22:57:04,368 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-06-07 22:57:04,370 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740/rep_barrier
2023-06-07 22:57:04,370 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740/rep_barrier
2023-06-07 22:57:04,370 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-06-07 22:57:04,371 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 22:57:04,371 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-06-07 22:57:04,372 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740/table
2023-06-07 22:57:04,372 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740/table
2023-06-07 22:57:04,374 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-06-07 22:57:04,375 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 22:57:04,376 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740
2023-06-07 22:57:04,377 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/meta/1588230740
2023-06-07 22:57:04,380 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-06-07 22:57:04,382 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740
2023-06-07 22:57:04,383 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=860063, jitterRate=0.0936269760131836}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-06-07 22:57:04,383 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740:
2023-06-07 22:57:04,385 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686178624342
2023-06-07 22:57:04,388 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740
2023-06-07 22:57:04,389 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740
2023-06-07 22:57:04,389 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46005,1686178623798, state=OPEN
2023-06-07 22:57:04,391 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server
2023-06-07 22:57:04,392 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-06-07 22:57:04,394 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2
2023-06-07 22:57:04,395 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46005,1686178623798 in 205 msec
2023-06-07 22:57:04,397 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1
2023-06-07 22:57:04,397 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 368 msec
2023-06-07 22:57:04,400 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 440 msec
2023-06-07 22:57:04,400 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686178624400, completionTime=-1
2023-06-07 22:57:04,400 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running
2023-06-07 22:57:04,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster...
2023-06-07 22:57:04,403 DEBUG [hconnection-0x7e39a6a7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-07 22:57:04,405 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46780, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-07 22:57:04,406 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1
2023-06-07 22:57:04,406 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686178684406
2023-06-07 22:57:04,406 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686178744406
2023-06-07 22:57:04,406 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec
2023-06-07 22:57:04,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45615,1686178623749-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:04,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45615,1686178623749-BalancerChore, period=300000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:04,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45615,1686178623749-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:04,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:45615, period=300000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:04,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:04,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-06-07 22:57:04,414 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-06-07 22:57:04,415 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-06-07 22:57:04,415 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175):
2023-06-07 22:57:04,416 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-06-07 22:57:04,418 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-06-07 22:57:04,419 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/.tmp/data/hbase/namespace/9b891e27dca706198bb1ad0b6b11f386
2023-06-07 22:57:04,420 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/.tmp/data/hbase/namespace/9b891e27dca706198bb1ad0b6b11f386 empty.
2023-06-07 22:57:04,421 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/.tmp/data/hbase/namespace/9b891e27dca706198bb1ad0b6b11f386
2023-06-07 22:57:04,421 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-06-07 22:57:04,433 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-06-07 22:57:04,434 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9b891e27dca706198bb1ad0b6b11f386, NAME => 'hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/.tmp
2023-06-07 22:57:04,443 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 22:57:04,443 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 9b891e27dca706198bb1ad0b6b11f386, disabling compactions & flushes
2023-06-07 22:57:04,443 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.
2023-06-07 22:57:04,443 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.
2023-06-07 22:57:04,443 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386. after waiting 0 ms
2023-06-07 22:57:04,443 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.
2023-06-07 22:57:04,443 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.
2023-06-07 22:57:04,443 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 9b891e27dca706198bb1ad0b6b11f386:
2023-06-07 22:57:04,446 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-06-07 22:57:04,447 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178624447"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178624447"}]},"ts":"1686178624447"}
2023-06-07 22:57:04,450 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-06-07 22:57:04,451 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-06-07 22:57:04,451 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178624451"}]},"ts":"1686178624451"}
2023-06-07 22:57:04,453 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2023-06-07 22:57:04,459 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9b891e27dca706198bb1ad0b6b11f386, ASSIGN}]
2023-06-07 22:57:04,461 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9b891e27dca706198bb1ad0b6b11f386, ASSIGN
2023-06-07 22:57:04,462 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=9b891e27dca706198bb1ad0b6b11f386, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46005,1686178623798; forceNewPlan=false, retain=false
2023-06-07 22:57:04,612 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=9b891e27dca706198bb1ad0b6b11f386, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46005,1686178623798
2023-06-07 22:57:04,613 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178624612"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178624612"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178624612"}]},"ts":"1686178624612"}
2023-06-07 22:57:04,615 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 9b891e27dca706198bb1ad0b6b11f386, server=jenkins-hbase4.apache.org,46005,1686178623798}]
2023-06-07 22:57:04,773 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.
2023-06-07 22:57:04,773 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9b891e27dca706198bb1ad0b6b11f386, NAME => 'hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.', STARTKEY => '', ENDKEY => ''}
2023-06-07 22:57:04,773 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 9b891e27dca706198bb1ad0b6b11f386
2023-06-07 22:57:04,774 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 22:57:04,774 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9b891e27dca706198bb1ad0b6b11f386
2023-06-07 22:57:04,774 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9b891e27dca706198bb1ad0b6b11f386
2023-06-07 22:57:04,775 INFO [StoreOpener-9b891e27dca706198bb1ad0b6b11f386-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 9b891e27dca706198bb1ad0b6b11f386
2023-06-07 22:57:04,777 DEBUG [StoreOpener-9b891e27dca706198bb1ad0b6b11f386-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/namespace/9b891e27dca706198bb1ad0b6b11f386/info
2023-06-07 22:57:04,777 DEBUG [StoreOpener-9b891e27dca706198bb1ad0b6b11f386-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/namespace/9b891e27dca706198bb1ad0b6b11f386/info
2023-06-07 22:57:04,777 INFO [StoreOpener-9b891e27dca706198bb1ad0b6b11f386-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9b891e27dca706198bb1ad0b6b11f386 columnFamilyName info
2023-06-07 22:57:04,778 INFO [StoreOpener-9b891e27dca706198bb1ad0b6b11f386-1] regionserver.HStore(310): Store=9b891e27dca706198bb1ad0b6b11f386/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 22:57:04,779 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/namespace/9b891e27dca706198bb1ad0b6b11f386
2023-06-07 22:57:04,780 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/namespace/9b891e27dca706198bb1ad0b6b11f386
2023-06-07 22:57:04,784 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9b891e27dca706198bb1ad0b6b11f386
2023-06-07 22:57:04,786 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/hbase/namespace/9b891e27dca706198bb1ad0b6b11f386/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-07 22:57:04,787 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9b891e27dca706198bb1ad0b6b11f386; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=757097, jitterRate=-0.03730231523513794}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-07 22:57:04,787 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9b891e27dca706198bb1ad0b6b11f386:
2023-06-07 22:57:04,789 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386., pid=6, masterSystemTime=1686178624768
2023-06-07 22:57:04,792 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.
2023-06-07 22:57:04,792 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.
2023-06-07 22:57:04,793 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=9b891e27dca706198bb1ad0b6b11f386, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46005,1686178623798
2023-06-07 22:57:04,793 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178624792"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1686178624792"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686178624792"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178624792"}]},"ts":"1686178624792"}
2023-06-07 22:57:04,798 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-06-07 22:57:04,798 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 9b891e27dca706198bb1ad0b6b11f386, server=jenkins-hbase4.apache.org,46005,1686178623798 in 180 msec
2023-06-07 22:57:04,801 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-06-07 22:57:04,801 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=9b891e27dca706198bb1ad0b6b11f386, ASSIGN in 339 msec
2023-06-07 22:57:04,802 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-06-07
22:57:04,802 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178624802"}]},"ts":"1686178624802"} 2023-06-07 22:57:04,804 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-07 22:57:04,807 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-07 22:57:04,810 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 394 msec 2023-06-07 22:57:04,816 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-07 22:57:04,817 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-07 22:57:04,817 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:04,822 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-07 22:57:04,831 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-07 22:57:04,835 
INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-06-07 22:57:04,844 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-07 22:57:04,853 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-07 22:57:04,856 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-06-07 22:57:04,868 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-07 22:57:04,870 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-07 22:57:04,870 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.047sec 2023-06-07 22:57:04,870 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-07 22:57:04,870 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-06-07 22:57:04,870 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-06-07 22:57:04,871 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45615,1686178623749-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-06-07 22:57:04,871 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45615,1686178623749-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-06-07 22:57:04,873 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-06-07 22:57:04,914 DEBUG [Listener at localhost/34723] zookeeper.ReadOnlyZKClient(139): Connect 0x07dff068 to 127.0.0.1:57222 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-07 22:57:04,918 DEBUG [Listener at localhost/34723] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6220d06e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-07 22:57:04,920 DEBUG [hconnection-0x4ca951f2-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-07 22:57:04,922 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46784, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-07 22:57:04,924 INFO [Listener at localhost/34723] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,45615,1686178623749
2023-06-07 22:57:04,924 INFO [Listener at localhost/34723] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:57:04,929 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer
2023-06-07 22:57:04,929 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:57:04,930 INFO [Listener at localhost/34723] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false
2023-06-07 22:57:04,943 INFO [Listener at localhost/34723] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-06-07 22:57:04,943 INFO [Listener at localhost/34723] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 22:57:04,943 INFO [Listener at localhost/34723] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-07 22:57:04,943 INFO [Listener at localhost/34723] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-07 22:57:04,943 INFO [Listener at localhost/34723] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 22:57:04,943 INFO [Listener at localhost/34723] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-07 22:57:04,943 INFO [Listener at localhost/34723] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-06-07 22:57:04,945 INFO [Listener at localhost/34723] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44917
2023-06-07 22:57:04,945 INFO [Listener at localhost/34723] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-06-07 22:57:04,946 DEBUG [Listener at localhost/34723] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-06-07 22:57:04,947 INFO [Listener at localhost/34723] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:57:04,948 INFO [Listener at localhost/34723] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:57:04,949 INFO [Listener at localhost/34723] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44917 connecting to ZooKeeper ensemble=127.0.0.1:57222
2023-06-07 22:57:04,953 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:449170x0, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-07 22:57:04,954 DEBUG [Listener at localhost/34723] zookeeper.ZKUtil(162): regionserver:449170x0, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-07 22:57:04,954 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44917-0x100a78273ee0005 connected
2023-06-07 22:57:04,955 DEBUG [Listener at localhost/34723] zookeeper.ZKUtil(162): regionserver:44917-0x100a78273ee0005, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on existing znode=/hbase/running
2023-06-07 22:57:04,955 DEBUG [Listener at localhost/34723] zookeeper.ZKUtil(164): regionserver:44917-0x100a78273ee0005, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-07 22:57:04,956 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44917
2023-06-07 22:57:04,958 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44917
2023-06-07 22:57:04,961 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44917
2023-06-07 22:57:04,962 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44917
2023-06-07 22:57:04,962 DEBUG [Listener at localhost/34723] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44917
2023-06-07 22:57:04,964 INFO [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(951): ClusterId : e118287c-5b09-438c-9b1f-4a9e83c12ec4
2023-06-07 22:57:04,964 DEBUG [RS:1;jenkins-hbase4:44917] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-06-07 22:57:04,967 DEBUG [RS:1;jenkins-hbase4:44917] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-06-07 22:57:04,967 DEBUG [RS:1;jenkins-hbase4:44917] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-06-07 22:57:04,968 DEBUG [RS:1;jenkins-hbase4:44917] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-06-07 22:57:04,969 DEBUG [RS:1;jenkins-hbase4:44917] zookeeper.ReadOnlyZKClient(139): Connect 0x778e6d83 to 127.0.0.1:57222 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-07 22:57:04,980 DEBUG [RS:1;jenkins-hbase4:44917] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@79499d02, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-07 22:57:04,981 DEBUG [RS:1;jenkins-hbase4:44917] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d7606f3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-06-07 22:57:04,989 DEBUG [RS:1;jenkins-hbase4:44917] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:44917
2023-06-07 22:57:04,989 INFO [RS:1;jenkins-hbase4:44917] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-06-07 22:57:04,990 INFO [RS:1;jenkins-hbase4:44917] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-06-07 22:57:04,990 DEBUG [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(1022): About to register with Master.
2023-06-07 22:57:04,990 INFO [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,45615,1686178623749 with isa=jenkins-hbase4.apache.org/172.31.14.131:44917, startcode=1686178624942
2023-06-07 22:57:04,990 DEBUG [RS:1;jenkins-hbase4:44917] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-06-07 22:57:04,994 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53093, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService
2023-06-07 22:57:04,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45615] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44917,1686178624942
2023-06-07 22:57:04,995 DEBUG [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719
2023-06-07 22:57:04,995 DEBUG [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38639
2023-06-07 22:57:04,995 DEBUG [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1
2023-06-07 22:57:04,997 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-07 22:57:04,997 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-07 22:57:04,997 DEBUG [RS:1;jenkins-hbase4:44917] zookeeper.ZKUtil(162): regionserver:44917-0x100a78273ee0005, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44917,1686178624942
2023-06-07 22:57:04,997 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44917,1686178624942]
2023-06-07 22:57:04,997 WARN [RS:1;jenkins-hbase4:44917] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-07 22:57:04,998 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46005,1686178623798
2023-06-07 22:57:04,998 INFO [RS:1;jenkins-hbase4:44917] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-07 22:57:04,998 DEBUG [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(1946): logDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942
2023-06-07 22:57:04,998 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44917,1686178624942
2023-06-07 22:57:05,003 DEBUG [RS:1;jenkins-hbase4:44917] zookeeper.ZKUtil(162): regionserver:44917-0x100a78273ee0005, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46005,1686178623798
2023-06-07 22:57:05,003 DEBUG [RS:1;jenkins-hbase4:44917] zookeeper.ZKUtil(162): regionserver:44917-0x100a78273ee0005, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44917,1686178624942
2023-06-07 22:57:05,004 DEBUG [RS:1;jenkins-hbase4:44917] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-06-07 22:57:05,005 INFO [RS:1;jenkins-hbase4:44917] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-06-07 22:57:05,007 INFO [RS:1;jenkins-hbase4:44917] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-06-07 22:57:05,008 INFO [RS:1;jenkins-hbase4:44917] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-06-07 22:57:05,008 INFO [RS:1;jenkins-hbase4:44917] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:05,008 INFO [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-06-07 22:57:05,009 INFO [RS:1;jenkins-hbase4:44917] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:05,009 DEBUG [RS:1;jenkins-hbase4:44917] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:05,010 DEBUG [RS:1;jenkins-hbase4:44917] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:05,010 DEBUG [RS:1;jenkins-hbase4:44917] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:05,010 DEBUG [RS:1;jenkins-hbase4:44917] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:05,010 DEBUG [RS:1;jenkins-hbase4:44917] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:05,010 DEBUG [RS:1;jenkins-hbase4:44917] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-06-07 22:57:05,010 DEBUG [RS:1;jenkins-hbase4:44917] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:05,010 DEBUG [RS:1;jenkins-hbase4:44917] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:05,010 DEBUG [RS:1;jenkins-hbase4:44917] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:05,010 DEBUG [RS:1;jenkins-hbase4:44917] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:05,011 INFO [RS:1;jenkins-hbase4:44917] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:05,011 INFO [RS:1;jenkins-hbase4:44917] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:05,011 INFO [RS:1;jenkins-hbase4:44917] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:05,022 INFO [RS:1;jenkins-hbase4:44917] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-06-07 22:57:05,022 INFO [RS:1;jenkins-hbase4:44917] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44917,1686178624942-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:05,032 INFO [RS:1;jenkins-hbase4:44917] regionserver.Replication(203): jenkins-hbase4.apache.org,44917,1686178624942 started
2023-06-07 22:57:05,032 INFO [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44917,1686178624942, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44917, sessionid=0x100a78273ee0005
2023-06-07 22:57:05,033 INFO [Listener at localhost/34723] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase4:44917,5,FailOnTimeoutGroup]
2023-06-07 22:57:05,033 DEBUG [RS:1;jenkins-hbase4:44917] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-06-07 22:57:05,033 INFO [Listener at localhost/34723] wal.TestLogRolling(323): Replication=2
2023-06-07 22:57:05,033 DEBUG [RS:1;jenkins-hbase4:44917] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44917,1686178624942
2023-06-07 22:57:05,033 DEBUG [RS:1;jenkins-hbase4:44917] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44917,1686178624942'
2023-06-07 22:57:05,034 DEBUG [RS:1;jenkins-hbase4:44917] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-07 22:57:05,034 DEBUG [RS:1;jenkins-hbase4:44917] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-07 22:57:05,035 DEBUG [Listener at localhost/34723] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-06-07 22:57:05,035 DEBUG [RS:1;jenkins-hbase4:44917] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-06-07 22:57:05,035 DEBUG [RS:1;jenkins-hbase4:44917] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-06-07 22:57:05,035 DEBUG [RS:1;jenkins-hbase4:44917] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44917,1686178624942
2023-06-07 22:57:05,036 DEBUG [RS:1;jenkins-hbase4:44917] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44917,1686178624942'
2023-06-07 22:57:05,036 DEBUG [RS:1;jenkins-hbase4:44917] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-06-07 22:57:05,036 DEBUG [RS:1;jenkins-hbase4:44917] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-06-07 22:57:05,037 DEBUG [RS:1;jenkins-hbase4:44917] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-06-07 22:57:05,037 INFO [RS:1;jenkins-hbase4:44917] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-06-07 22:57:05,037 INFO [RS:1;jenkins-hbase4:44917] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-06-07 22:57:05,038 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49854, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-06-07 22:57:05,040 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45615] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions.
2023-06-07 22:57:05,040 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45615] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing.
2023-06-07 22:57:05,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45615] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-07 22:57:05,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45615] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath
2023-06-07 22:57:05,044 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION
2023-06-07 22:57:05,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45615] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9
2023-06-07 22:57:05,045 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-06-07 22:57:05,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45615] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-06-07 22:57:05,047 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce
2023-06-07 22:57:05,048 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce empty.
2023-06-07 22:57:05,048 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce
2023-06-07 22:57:05,049 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions
2023-06-07 22:57:05,062 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001
2023-06-07 22:57:05,063 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1dac4ba4ee44ff9e131f583c120c6fce, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/.tmp
2023-06-07 22:57:05,073 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 22:57:05,073 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing 1dac4ba4ee44ff9e131f583c120c6fce, disabling compactions & flushes
2023-06-07 22:57:05,073 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.
2023-06-07 22:57:05,073 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.
2023-06-07 22:57:05,073 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce. after waiting 0 ms
2023-06-07 22:57:05,073 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.
2023-06-07 22:57:05,073 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce. 2023-06-07 22:57:05,074 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for 1dac4ba4ee44ff9e131f583c120c6fce: 2023-06-07 22:57:05,076 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-06-07 22:57:05,078 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1686178625078"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178625078"}]},"ts":"1686178625078"} 2023-06-07 22:57:05,080 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
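The `MetaTableAccessor` "Put" payloads in the entries above are serialized as plain JSON, so they can be inspected directly when triaging a run like this. A minimal sketch (the sample string is copied verbatim from the `Added 1 regions to meta` entry above; the field names are whatever the log emits, nothing here is an HBase API):

```python
import json

# A meta "Put" payload exactly as it appears in the log entry above.
put = json.loads(
    '{"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,'
    '1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.",'
    '"families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1686178625078"},'
    '{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178625078"}]},'
    '"ts":"1686178625078"}'
)

# The meta row key encodes <table>,<startkey>,<regionid>.<encoded region name>.
row = put["row"]
# Which qualifiers this mutation touched in the "info" family.
qualifiers = [cell["qualifier"] for cell in put["families"]["info"]]
```

This confirms at a glance that the create-table step wrote both the `regioninfo` and `state` cells for region `1dac4ba4ee44ff9e131f583c120c6fce` in a single mutation.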
2023-06-07 22:57:05,081 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-07 22:57:05,081 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178625081"}]},"ts":"1686178625081"} 2023-06-07 22:57:05,083 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-06-07 22:57:05,090 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-06-07 22:57:05,092 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-06-07 22:57:05,092 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-06-07 22:57:05,092 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-06-07 22:57:05,093 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=1dac4ba4ee44ff9e131f583c120c6fce, ASSIGN}] 2023-06-07 22:57:05,094 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=1dac4ba4ee44ff9e131f583c120c6fce, ASSIGN 2023-06-07 22:57:05,096 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=TestLogRolling-testLogRollOnDatanodeDeath, region=1dac4ba4ee44ff9e131f583c120c6fce, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44917,1686178624942; forceNewPlan=false, retain=false 2023-06-07 22:57:05,140 INFO [RS:1;jenkins-hbase4:44917] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44917%2C1686178624942, suffix=, logDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942, archiveDir=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/oldWALs, maxLogs=32 2023-06-07 22:57:05,152 INFO [RS:1;jenkins-hbase4:44917] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178625141 2023-06-07 22:57:05,152 DEBUG [RS:1;jenkins-hbase4:44917] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK], DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]] 2023-06-07 22:57:05,248 INFO [jenkins-hbase4:45615] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-06-07 22:57:05,249 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=1dac4ba4ee44ff9e131f583c120c6fce, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44917,1686178624942 2023-06-07 22:57:05,249 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1686178625249"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178625249"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178625249"}]},"ts":"1686178625249"} 2023-06-07 22:57:05,252 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 1dac4ba4ee44ff9e131f583c120c6fce, server=jenkins-hbase4.apache.org,44917,1686178624942}] 2023-06-07 22:57:05,405 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44917,1686178624942 2023-06-07 22:57:05,405 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-07 22:57:05,408 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32858, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-07 22:57:05,412 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce. 
2023-06-07 22:57:05,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1dac4ba4ee44ff9e131f583c120c6fce, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:57:05,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath 1dac4ba4ee44ff9e131f583c120c6fce 2023-06-07 22:57:05,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:57:05,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1dac4ba4ee44ff9e131f583c120c6fce 2023-06-07 22:57:05,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1dac4ba4ee44ff9e131f583c120c6fce 2023-06-07 22:57:05,415 INFO [StoreOpener-1dac4ba4ee44ff9e131f583c120c6fce-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1dac4ba4ee44ff9e131f583c120c6fce 2023-06-07 22:57:05,417 DEBUG [StoreOpener-1dac4ba4ee44ff9e131f583c120c6fce-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/info 2023-06-07 22:57:05,417 DEBUG [StoreOpener-1dac4ba4ee44ff9e131f583c120c6fce-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/info 2023-06-07 22:57:05,417 INFO [StoreOpener-1dac4ba4ee44ff9e131f583c120c6fce-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1dac4ba4ee44ff9e131f583c120c6fce columnFamilyName info 2023-06-07 22:57:05,418 INFO [StoreOpener-1dac4ba4ee44ff9e131f583c120c6fce-1] regionserver.HStore(310): Store=1dac4ba4ee44ff9e131f583c120c6fce/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:57:05,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce 2023-06-07 22:57:05,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce 2023-06-07 22:57:05,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 1dac4ba4ee44ff9e131f583c120c6fce 2023-06-07 22:57:05,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:57:05,428 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1dac4ba4ee44ff9e131f583c120c6fce; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=757661, jitterRate=-0.03658515214920044}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-07 22:57:05,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1dac4ba4ee44ff9e131f583c120c6fce: 2023-06-07 22:57:05,430 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce., pid=11, masterSystemTime=1686178625405 2023-06-07 22:57:05,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce. 2023-06-07 22:57:05,433 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce. 
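The `pid=…, ppid=…, state=…` markers threaded through the procedure entries above (CreateTableProcedure pid=9, its ASSIGN child pid=10, and the OpenRegionProcedure pid=11) make it possible to reconstruct the procedure tree from the raw log. A hedged sketch — the regex is my own, not anything from the HBase codebase, and the sample line is taken from the `ProcedureExecutor` entries in this log:

```python
import re

# Matches fragments like "pid=11, ppid=10, state=SUCCESS" in ProcedureExecutor
# entries; ppid is optional because root procedures (e.g. pid=9) have no parent.
PROC = re.compile(r"pid=(\d+)(?:, ppid=(\d+))?, state=([A-Z_:]+)")

line = ("Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure "
        "1dac4ba4ee44ff9e131f583c120c6fce, server=jenkins-hbase4.apache.org,"
        "44917,1686178624942 in 184 msec")

m = PROC.search(line)
pid, ppid, state = int(m.group(1)), int(m.group(2)), m.group(3)
```

Running the same pattern over every line of the log and keying children by `ppid` reproduces the 9 → 10 → 11 chain that the executor reports as it resumes each parent.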
2023-06-07 22:57:05,434 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=1dac4ba4ee44ff9e131f583c120c6fce, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44917,1686178624942 2023-06-07 22:57:05,434 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1686178625434"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1686178625434"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686178625434"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178625434"}]},"ts":"1686178625434"} 2023-06-07 22:57:05,439 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-07 22:57:05,439 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 1dac4ba4ee44ff9e131f583c120c6fce, server=jenkins-hbase4.apache.org,44917,1686178624942 in 184 msec 2023-06-07 22:57:05,442 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-07 22:57:05,443 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=1dac4ba4ee44ff9e131f583c120c6fce, ASSIGN in 346 msec 2023-06-07 22:57:05,444 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-07 22:57:05,444 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178625444"}]},"ts":"1686178625444"} 2023-06-07 22:57:05,446 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-06-07 22:57:05,449 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-06-07 22:57:05,451 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 408 msec 2023-06-07 22:57:07,910 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-07 22:57:10,056 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-07 22:57:10,056 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-07 22:57:11,005 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-06-07 22:57:15,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45615] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-07 22:57:15,047 INFO [Listener at localhost/34723] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-06-07 22:57:15,050 DEBUG [Listener at localhost/34723] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 2023-06-07 22:57:15,051 DEBUG 
[Listener at localhost/34723] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce. 2023-06-07 22:57:15,063 WARN [Listener at localhost/34723] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-07 22:57:15,066 WARN [Listener at localhost/34723] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-07 22:57:15,067 INFO [Listener at localhost/34723] log.Slf4jLog(67): jetty-6.1.26 2023-06-07 22:57:15,071 INFO [Listener at localhost/34723] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/java.io.tmpdir/Jetty_localhost_41043_datanode____rq51x5/webapp 2023-06-07 22:57:15,163 INFO [Listener at localhost/34723] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41043 2023-06-07 22:57:15,172 WARN [Listener at localhost/32937] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-07 22:57:15,193 WARN [Listener at localhost/32937] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-07 22:57:15,195 WARN [Listener at localhost/32937] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-07 22:57:15,196 INFO [Listener at localhost/32937] log.Slf4jLog(67): jetty-6.1.26 2023-06-07 22:57:15,200 INFO [Listener at localhost/32937] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/java.io.tmpdir/Jetty_localhost_34193_datanode____y6844x/webapp 2023-06-07 22:57:15,277 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb8a1a6f18baf0d5d: Processing first storage report for DS-2b6d44b6-a5af-41b7-8ff9-0c8b64b85cef from datanode 25aff837-2008-42bd-bef9-6e9bccb98f0a 2023-06-07 22:57:15,277 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb8a1a6f18baf0d5d: from storage DS-2b6d44b6-a5af-41b7-8ff9-0c8b64b85cef node DatanodeRegistration(127.0.0.1:39577, datanodeUuid=25aff837-2008-42bd-bef9-6e9bccb98f0a, infoPort=38277, infoSecurePort=0, ipcPort=32937, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:57:15,277 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb8a1a6f18baf0d5d: Processing first storage report for DS-edabd6ec-f8e8-40d6-9c50-08baa0f57baf from datanode 25aff837-2008-42bd-bef9-6e9bccb98f0a 2023-06-07 22:57:15,277 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb8a1a6f18baf0d5d: from storage DS-edabd6ec-f8e8-40d6-9c50-08baa0f57baf node DatanodeRegistration(127.0.0.1:39577, datanodeUuid=25aff837-2008-42bd-bef9-6e9bccb98f0a, infoPort=38277, infoSecurePort=0, ipcPort=32937, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:57:15,306 INFO [Listener at localhost/32937] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34193 2023-06-07 22:57:15,314 WARN [Listener at localhost/39977] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-07 22:57:15,332 
WARN [Listener at localhost/39977] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-07 22:57:15,334 WARN [Listener at localhost/39977] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-07 22:57:15,335 INFO [Listener at localhost/39977] log.Slf4jLog(67): jetty-6.1.26 2023-06-07 22:57:15,340 INFO [Listener at localhost/39977] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/java.io.tmpdir/Jetty_localhost_34859_datanode____.2bxz2/webapp 2023-06-07 22:57:15,410 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4592726bddddc4c6: Processing first storage report for DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c from datanode 6aea27ae-ffc0-48e0-89f6-2b43136fbc78 2023-06-07 22:57:15,410 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4592726bddddc4c6: from storage DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c node DatanodeRegistration(127.0.0.1:40179, datanodeUuid=6aea27ae-ffc0-48e0-89f6-2b43136fbc78, infoPort=40237, infoSecurePort=0, ipcPort=39977, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:57:15,410 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4592726bddddc4c6: Processing first storage report for DS-62cb00d0-5cfa-4159-bc82-92352376607b from datanode 6aea27ae-ffc0-48e0-89f6-2b43136fbc78 2023-06-07 22:57:15,410 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4592726bddddc4c6: from storage 
DS-62cb00d0-5cfa-4159-bc82-92352376607b node DatanodeRegistration(127.0.0.1:40179, datanodeUuid=6aea27ae-ffc0-48e0-89f6-2b43136fbc78, infoPort=40237, infoSecurePort=0, ipcPort=39977, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:57:15,443 INFO [Listener at localhost/39977] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34859 2023-06-07 22:57:15,451 WARN [Listener at localhost/41319] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-07 22:57:15,560 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3e0b479bca6c5c35: Processing first storage report for DS-4ccfbf03-2124-41b1-a1b2-f01295488710 from datanode 65ce2998-e88d-4bde-beb2-137d8cf0e4af 2023-06-07 22:57:15,560 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3e0b479bca6c5c35: from storage DS-4ccfbf03-2124-41b1-a1b2-f01295488710 node DatanodeRegistration(127.0.0.1:32835, datanodeUuid=65ce2998-e88d-4bde-beb2-137d8cf0e4af, infoPort=33949, infoSecurePort=0, ipcPort=41319, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:57:15,560 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3e0b479bca6c5c35: Processing first storage report for DS-6ed3733c-96de-4250-b819-fb54a8c9e859 from datanode 65ce2998-e88d-4bde-beb2-137d8cf0e4af 2023-06-07 22:57:15,560 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3e0b479bca6c5c35: from storage DS-6ed3733c-96de-4250-b819-fb54a8c9e859 node DatanodeRegistration(127.0.0.1:32835, datanodeUuid=65ce2998-e88d-4bde-beb2-137d8cf0e4af, infoPort=33949, infoSecurePort=0, ipcPort=41319, 
storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:57:15,659 WARN [Listener at localhost/41319] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-07 22:57:15,660 WARN [ResponseProcessor for block BP-373150315-172.31.14.131-1686178623141:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-373150315-172.31.14.131-1686178623141:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-07 22:57:15,663 WARN [ResponseProcessor for block BP-373150315-172.31.14.131-1686178623141:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-373150315-172.31.14.131-1686178623141:blk_1073741832_1008 java.io.IOException: Bad response ERROR for BP-373150315-172.31.14.131-1686178623141:blk_1073741832_1008 from datanode DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-07 22:57:15,662 WARN [ResponseProcessor for block BP-373150315-172.31.14.131-1686178623141:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-373150315-172.31.14.131-1686178623141:blk_1073741838_1014 java.io.IOException: Bad response ERROR for BP-373150315-172.31.14.131-1686178623141:blk_1073741838_1014 from datanode DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-07 22:57:15,661 WARN [ResponseProcessor for block 
BP-373150315-172.31.14.131-1686178623141:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-373150315-172.31.14.131-1686178623141:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-07 22:57:15,664 WARN [DataStreamer for file /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178625141 block BP-373150315-172.31.14.131-1686178623141:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-373150315-172.31.14.131-1686178623141:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK], DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]) is bad. 2023-06-07 22:57:15,664 WARN [DataStreamer for file /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798/jenkins-hbase4.apache.org%2C46005%2C1686178623798.meta.1686178624353.meta block BP-373150315-172.31.14.131-1686178623141:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-373150315-172.31.14.131-1686178623141:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK], DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]) is bad. 
2023-06-07 22:57:15,664 WARN [PacketResponder: BP-373150315-172.31.14.131-1686178623141:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39021]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,664 WARN [PacketResponder: BP-373150315-172.31.14.131-1686178623141:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39021]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at 
org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,664 WARN [DataStreamer for file /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798/jenkins-hbase4.apache.org%2C46005%2C1686178623798.1686178624203 block BP-373150315-172.31.14.131-1686178623141:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-373150315-172.31.14.131-1686178623141:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK], DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]) is bad. 
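When the datanode is killed, each open WAL stream logs a `DataStreamer` "Error Recovery" entry like the ones above, naming which pipeline replica went bad. A small sketch for pulling that out — the pattern is an assumption based on the wording of these entries, not an HDFS API, and the sample line is abridged from the `blk_1073741832_1008` entry above:

```python
import re

# Extracts the failed replica's index and address from a DataStreamer
# "Error Recovery ... datanode N(DatanodeInfoWithStorage[...]) is bad." entry.
BAD = re.compile(r"datanode (\d+)\(DatanodeInfoWithStorage\[([\d.:]+),")

line = ("Error Recovery for BP-373150315-172.31.14.131-1686178623141:"
        "blk_1073741832_1008 in pipeline [...]: datanode 1(DatanodeInfoWithStorage"
        "[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]) is bad.")

m = BAD.search(line)
index, address = int(m.group(1)), m.group(2)
```

All four recovery entries in this section point at the same address, `127.0.0.1:39021`, which is exactly the datanode the test just stopped — the expected trigger for the log roll under test.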
2023-06-07 22:57:15,664 WARN [DataStreamer for file /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/WALs/jenkins-hbase4.apache.org,45615,1686178623749/jenkins-hbase4.apache.org%2C45615%2C1686178623749.1686178623887 block BP-373150315-172.31.14.131-1686178623141:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-373150315-172.31.14.131-1686178623141:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK], DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]) is bad. 2023-06-07 22:57:15,670 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1988794237_17 at /127.0.0.1:56838 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:35915:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56838 dst: /127.0.0.1:35915 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at 
java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,673 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:56886 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:35915:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56886 dst: /127.0.0.1:35915 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,674 INFO [Listener at localhost/41319] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-07 22:57:15,677 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1988794237_17 at /127.0.0.1:56844 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:35915:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56844 dst: /127.0.0.1:35915 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35915 
remote=/127.0.0.1:56844]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,678 WARN [PacketResponder: BP-373150315-172.31.14.131-1686178623141:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35915]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,677 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-827504819_17 at /127.0.0.1:56816 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:35915:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56816 dst: /127.0.0.1:35915 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35915 remote=/127.0.0.1:56816]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,677 WARN [PacketResponder: BP-373150315-172.31.14.131-1686178623141:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35915]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,681 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-827504819_17 at /127.0.0.1:35290 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:39021:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35290 dst: /127.0.0.1:39021 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,682 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1988794237_17 at /127.0.0.1:35318 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:39021:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35318 dst: /127.0.0.1:39021 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,720 WARN [BP-373150315-172.31.14.131-1686178623141 heartbeating to localhost/127.0.0.1:38639] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-373150315-172.31.14.131-1686178623141 (Datanode Uuid fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9) service to localhost/127.0.0.1:38639 2023-06-07 22:57:15,721 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data3/current/BP-373150315-172.31.14.131-1686178623141] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 22:57:15,721 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data4/current/BP-373150315-172.31.14.131-1686178623141] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 22:57:15,777 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1988794237_17 at /127.0.0.1:35310 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:39021:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35310 dst: /127.0.0.1:39021 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,778 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:35362 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:39021:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35362 dst: /127.0.0.1:39021 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,780 WARN [Listener at localhost/41319] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-07 22:57:15,780 WARN [ResponseProcessor for block BP-373150315-172.31.14.131-1686178623141:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-373150315-172.31.14.131-1686178623141:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-07 22:57:15,781 WARN [ResponseProcessor for block BP-373150315-172.31.14.131-1686178623141:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-373150315-172.31.14.131-1686178623141:blk_1073741838_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-07 22:57:15,781 WARN [ResponseProcessor for block BP-373150315-172.31.14.131-1686178623141:blk_1073741832_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-373150315-172.31.14.131-1686178623141:blk_1073741832_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-07 22:57:15,781 WARN [ResponseProcessor for block BP-373150315-172.31.14.131-1686178623141:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-373150315-172.31.14.131-1686178623141:blk_1073741833_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at 
org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-07 22:57:15,786 INFO [Listener at localhost/41319] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-07 22:57:15,889 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:33666 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:35915:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33666 dst: /127.0.0.1:35915 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,890 WARN [BP-373150315-172.31.14.131-1686178623141 heartbeating to localhost/127.0.0.1:38639] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-07 22:57:15,890 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-827504819_17 at /127.0.0.1:33646 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:35915:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33646 dst: /127.0.0.1:35915 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,889 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1988794237_17 at /127.0.0.1:33678 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:35915:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33678 dst: /127.0.0.1:35915 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,889 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1988794237_17 at /127.0.0.1:33644 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:35915:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33644 dst: /127.0.0.1:35915 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:15,891 WARN [BP-373150315-172.31.14.131-1686178623141 heartbeating to localhost/127.0.0.1:38639] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-373150315-172.31.14.131-1686178623141 (Datanode Uuid a990a84f-5c04-4b37-bc31-2d73b27b282f) service to localhost/127.0.0.1:38639 2023-06-07 22:57:15,893 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data1/current/BP-373150315-172.31.14.131-1686178623141] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 22:57:15,893 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data2/current/BP-373150315-172.31.14.131-1686178623141] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 22:57:15,898 DEBUG [Listener at localhost/41319] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-07 22:57:15,900 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43308, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-07 22:57:15,901 WARN [RS:1;jenkins-hbase4:44917.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... 
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-07 22:57:15,901 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44917%2C1686178624942:(num 1686178625141) roll requested
2023-06-07 22:57:15,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44917] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-07 22:57:15,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44917] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:43308 deadline: 1686178645900, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL
2023-06-07 22:57:15,910 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL
2023-06-07 22:57:15,911 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178625141 with entries=1, filesize=466 B; new WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178635902
2023-06-07 22:57:15,912 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39577,DS-2b6d44b6-a5af-41b7-8ff9-0c8b64b85cef,DISK], DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK]]
2023-06-07 22:57:15,912 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-07 22:57:15,912 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178625141 is not closed yet, will try archiving it next time
2023-06-07 22:57:15,912 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178625141; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-07 22:57:15,913 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178625141 to hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/oldWALs/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178625141
2023-06-07 22:57:28,035 INFO [Listener at localhost/41319] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178635902
2023-06-07 22:57:28,036 WARN [Listener at localhost/41319] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-07 22:57:28,037 WARN [ResponseProcessor for block BP-373150315-172.31.14.131-1686178623141:blk_1073741839_1019] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-373150315-172.31.14.131-1686178623141:blk_1073741839_1019
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-07 22:57:28,038 WARN [DataStreamer for file /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178635902 block BP-373150315-172.31.14.131-1686178623141:blk_1073741839_1019] hdfs.DataStreamer(1548): Error Recovery for BP-373150315-172.31.14.131-1686178623141:blk_1073741839_1019 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39577,DS-2b6d44b6-a5af-41b7-8ff9-0c8b64b85cef,DISK], DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39577,DS-2b6d44b6-a5af-41b7-8ff9-0c8b64b85cef,DISK]) is bad.
2023-06-07 22:57:28,041 INFO [Listener at localhost/41319] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:57:28,043 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:60682 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:40179:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60682 dst: /127.0.0.1:40179
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:40179 remote=/127.0.0.1:60682]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:57:28,043 WARN [PacketResponder: BP-373150315-172.31.14.131-1686178623141:blk_1073741839_1019, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40179]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:57:28,044 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:54038 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:39577:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54038 dst: /127.0.0.1:39577
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:57:28,146 WARN [BP-373150315-172.31.14.131-1686178623141 heartbeating to localhost/127.0.0.1:38639] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-07 22:57:28,146 WARN [BP-373150315-172.31.14.131-1686178623141 heartbeating to localhost/127.0.0.1:38639] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-373150315-172.31.14.131-1686178623141 (Datanode Uuid 25aff837-2008-42bd-bef9-6e9bccb98f0a) service to localhost/127.0.0.1:38639
2023-06-07 22:57:28,146 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data5/current/BP-373150315-172.31.14.131-1686178623141] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:57:28,147 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data6/current/BP-373150315-172.31.14.131-1686178623141] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:57:28,151 WARN [sync.3] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK]]
2023-06-07 22:57:28,151 WARN [sync.3] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK]]
2023-06-07 22:57:28,151 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44917%2C1686178624942:(num 1686178635902) roll requested
2023-06-07 22:57:28,156 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:34868 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741840_1021]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data8/current]'}, localName='127.0.0.1:40179', datanodeUuid='6aea27ae-ffc0-48e0-89f6-2b43136fbc78', xmitsInProgress=0}:Exception transfering block BP-373150315-172.31.14.131-1686178623141:blk_1073741840_1021 to mirror 127.0.0.1:39577: java.net.ConnectException: Connection refused
2023-06-07 22:57:28,156 WARN [Thread-639] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741840_1021
2023-06-07 22:57:28,156 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:34868 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741840_1021]] datanode.DataXceiver(323): 127.0.0.1:40179:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34868 dst: /127.0.0.1:40179
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:57:28,159 WARN [Thread-639] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39577,DS-2b6d44b6-a5af-41b7-8ff9-0c8b64b85cef,DISK]
2023-06-07 22:57:28,163 WARN [Thread-639] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741841_1022
2023-06-07 22:57:28,163 WARN [Thread-639] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]
2023-06-07 22:57:28,166 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:34880 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741842_1023]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data8/current]'}, localName='127.0.0.1:40179', datanodeUuid='6aea27ae-ffc0-48e0-89f6-2b43136fbc78', xmitsInProgress=0}:Exception transfering block BP-373150315-172.31.14.131-1686178623141:blk_1073741842_1023 to mirror 127.0.0.1:35915: java.net.ConnectException: Connection refused
2023-06-07 22:57:28,166 WARN [Thread-639] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741842_1023
2023-06-07 22:57:28,166 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:34880 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:40179:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34880 dst: /127.0.0.1:40179
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:57:28,166 WARN [Thread-639] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]
2023-06-07 22:57:28,172 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178635902 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178648151
2023-06-07 22:57:28,172 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK], DatanodeInfoWithStorage[127.0.0.1:32835,DS-4ccfbf03-2124-41b1-a1b2-f01295488710,DISK]]
2023-06-07 22:57:28,172 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178635902 is not closed yet, will try archiving it next time
2023-06-07 22:57:30,424 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@42a99d53] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40179, datanodeUuid=6aea27ae-ffc0-48e0-89f6-2b43136fbc78, infoPort=40237, infoSecurePort=0, ipcPort=39977, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141):Failed to transfer BP-373150315-172.31.14.131-1686178623141:blk_1073741839_1020 to 127.0.0.1:39577 got
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:57:32,157 WARN [Listener at localhost/41319] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-07 22:57:32,158 WARN [ResponseProcessor for block BP-373150315-172.31.14.131-1686178623141:blk_1073741843_1024] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-373150315-172.31.14.131-1686178623141:blk_1073741843_1024
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-07 22:57:32,159 WARN [DataStreamer for file /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178648151 block BP-373150315-172.31.14.131-1686178623141:blk_1073741843_1024] hdfs.DataStreamer(1548): Error Recovery for BP-373150315-172.31.14.131-1686178623141:blk_1073741843_1024 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK], DatanodeInfoWithStorage[127.0.0.1:32835,DS-4ccfbf03-2124-41b1-a1b2-f01295488710,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK]) is bad.
2023-06-07 22:57:32,163 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:54370 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741843_1024]] datanode.DataXceiver(323): 127.0.0.1:32835:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54370 dst: /127.0.0.1:32835
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:32835 remote=/127.0.0.1:54370]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:57:32,163 INFO [Listener at localhost/41319] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:57:32,163 WARN [PacketResponder: BP-373150315-172.31.14.131-1686178623141:blk_1073741843_1024, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:32835]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:57:32,165 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:34882 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741843_1024]] datanode.DataXceiver(323): 127.0.0.1:40179:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34882 dst: /127.0.0.1:40179
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:57:32,269 WARN [BP-373150315-172.31.14.131-1686178623141 heartbeating to localhost/127.0.0.1:38639] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-07 22:57:32,269 WARN [BP-373150315-172.31.14.131-1686178623141 heartbeating to localhost/127.0.0.1:38639] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-373150315-172.31.14.131-1686178623141 (Datanode Uuid 6aea27ae-ffc0-48e0-89f6-2b43136fbc78) service to localhost/127.0.0.1:38639
2023-06-07 22:57:32,270 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data7/current/BP-373150315-172.31.14.131-1686178623141] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:57:32,270 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data8/current/BP-373150315-172.31.14.131-1686178623141] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:57:32,275 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:32835,DS-4ccfbf03-2124-41b1-a1b2-f01295488710,DISK]]
2023-06-07 22:57:32,275 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:32835,DS-4ccfbf03-2124-41b1-a1b2-f01295488710,DISK]]
2023-06-07 22:57:32,275 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44917%2C1686178624942:(num 1686178648151) roll requested
2023-06-07 22:57:32,278 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741844_1026
2023-06-07 22:57:32,279 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]
2023-06-07 22:57:32,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44917] regionserver.HRegion(9158): Flush requested on 1dac4ba4ee44ff9e131f583c120c6fce
2023-06-07 22:57:32,280 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1dac4ba4ee44ff9e131f583c120c6fce 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-07 22:57:32,281 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741845_1027
2023-06-07 22:57:32,281 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39577,DS-2b6d44b6-a5af-41b7-8ff9-0c8b64b85cef,DISK]
2023-06-07 22:57:32,283 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741846_1028
2023-06-07 22:57:32,284 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK]
2023-06-07 22:57:32,288 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:48994 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741847_1029]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data10/current]'}, localName='127.0.0.1:32835', datanodeUuid='65ce2998-e88d-4bde-beb2-137d8cf0e4af', xmitsInProgress=0}:Exception transfering block BP-373150315-172.31.14.131-1686178623141:blk_1073741847_1029 to mirror 127.0.0.1:35915: java.net.ConnectException: Connection refused
2023-06-07 22:57:32,288 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741847_1029
2023-06-07 22:57:32,288 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:48994 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741847_1029]] datanode.DataXceiver(323): 127.0.0.1:32835:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:48994 dst: /127.0.0.1:32835
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:57:32,288 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:48998 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741848_1030]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data10/current]'}, localName='127.0.0.1:32835', datanodeUuid='65ce2998-e88d-4bde-beb2-137d8cf0e4af', xmitsInProgress=0}:Exception transfering block BP-373150315-172.31.14.131-1686178623141:blk_1073741848_1030 to mirror 127.0.0.1:40179: java.net.ConnectException: Connection refused
2023-06-07 22:57:32,288 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741848_1030
2023-06-07 22:57:32,289 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]
2023-06-07 22:57:32,289 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:48998 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741848_1030]] datanode.DataXceiver(323): 127.0.0.1:32835:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:48998 dst: /127.0.0.1:32835
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:57:32,289 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK]
2023-06-07 22:57:32,290 WARN [IPC Server handler 1 on default port 38639] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2023-06-07 22:57:32,290 WARN [IPC Server handler 1 on default port 38639] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2023-06-07 22:57:32,290 WARN [IPC Server handler 1 on default port 38639] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-06-07 22:57:32,291 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:49008 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741849_1031]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data10/current]'}, localName='127.0.0.1:32835', datanodeUuid='65ce2998-e88d-4bde-beb2-137d8cf0e4af', xmitsInProgress=0}:Exception transfering block BP-373150315-172.31.14.131-1686178623141:blk_1073741849_1031 to mirror 127.0.0.1:39577: java.net.ConnectException: Connection refused
2023-06-07 22:57:32,291 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741849_1031
2023-06-07 22:57:32,291 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:49008 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741849_1031]] datanode.DataXceiver(323): 127.0.0.1:32835:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49008 dst: /127.0.0.1:32835
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:32,295 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39577,DS-2b6d44b6-a5af-41b7-8ff9-0c8b64b85cef,DISK] 2023-06-07 22:57:32,296 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178648151 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178652275 2023-06-07 22:57:32,297 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32835,DS-4ccfbf03-2124-41b1-a1b2-f01295488710,DISK]] 2023-06-07 22:57:32,297 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178648151 is not closed yet, will try archiving it next time 2023-06-07 22:57:32,297 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:49022 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741851_1033]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data9/current, 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data10/current]'}, localName='127.0.0.1:32835', datanodeUuid='65ce2998-e88d-4bde-beb2-137d8cf0e4af', xmitsInProgress=0}:Exception transfering block BP-373150315-172.31.14.131-1686178623141:blk_1073741851_1033 to mirror 127.0.0.1:35915: java.net.ConnectException: Connection refused 2023-06-07 22:57:32,297 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741851_1033 2023-06-07 22:57:32,297 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:49022 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741851_1033]] datanode.DataXceiver(323): 127.0.0.1:32835:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49022 dst: /127.0.0.1:32835 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:32,298 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK] 2023-06-07 22:57:32,300 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:49032 
[Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741852_1034]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data10/current]'}, localName='127.0.0.1:32835', datanodeUuid='65ce2998-e88d-4bde-beb2-137d8cf0e4af', xmitsInProgress=0}:Exception transfering block BP-373150315-172.31.14.131-1686178623141:blk_1073741852_1034 to mirror 127.0.0.1:39021: java.net.ConnectException: Connection refused 2023-06-07 22:57:32,300 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741852_1034 2023-06-07 22:57:32,300 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:49032 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741852_1034]] datanode.DataXceiver(323): 127.0.0.1:32835:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49032 dst: /127.0.0.1:32835 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:32,301 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK] 2023-06-07 22:57:32,301 WARN [IPC Server handler 3 on default port 38639] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-07 22:57:32,302 WARN [IPC Server handler 3 on default port 38639] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-07 22:57:32,302 WARN [IPC Server handler 3 on default port 38639] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-07 22:57:32,493 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:32835,DS-4ccfbf03-2124-41b1-a1b2-f01295488710,DISK]] 2023-06-07 22:57:32,493 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:32835,DS-4ccfbf03-2124-41b1-a1b2-f01295488710,DISK]] 2023-06-07 22:57:32,493 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44917%2C1686178624942:(num 1686178652275) roll requested 2023-06-07 22:57:32,496 WARN [Thread-666] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741854_1036 2023-06-07 22:57:32,497 WARN [Thread-666] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK] 2023-06-07 22:57:32,498 WARN [Thread-666] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741855_1037 2023-06-07 22:57:32,499 WARN [Thread-666] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK] 2023-06-07 22:57:32,500 WARN [Thread-666] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741856_1038 2023-06-07 22:57:32,500 WARN [Thread-666] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39021,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK] 2023-06-07 22:57:32,502 WARN [Thread-666] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741857_1039 2023-06-07 22:57:32,502 WARN [Thread-666] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39577,DS-2b6d44b6-a5af-41b7-8ff9-0c8b64b85cef,DISK] 2023-06-07 22:57:32,503 WARN [IPC Server handler 2 on default port 38639] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], 
storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-07 22:57:32,503 WARN [IPC Server handler 2 on default port 38639] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-07 22:57:32,503 WARN [IPC Server handler 2 on default port 38639] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-07 22:57:32,507 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178652275 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178652493 2023-06-07 22:57:32,507 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32835,DS-4ccfbf03-2124-41b1-a1b2-f01295488710,DISK]] 2023-06-07 22:57:32,507 DEBUG [regionserver/jenkins-hbase4:0.logRoller] 
wal.AbstractFSWAL(716): hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178652275 is not closed yet, will try archiving it next time 2023-06-07 22:57:32,696 WARN [sync.1] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 2023-06-07 22:57:32,706 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/.tmp/info/a5ed8f8bc6f7413aa822308b154e2a60 2023-06-07 22:57:32,715 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/.tmp/info/a5ed8f8bc6f7413aa822308b154e2a60 as hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/info/a5ed8f8bc6f7413aa822308b154e2a60 2023-06-07 22:57:32,720 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/info/a5ed8f8bc6f7413aa822308b154e2a60, entries=5, sequenceid=12, filesize=10.0 K 2023-06-07 22:57:32,721 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for 1dac4ba4ee44ff9e131f583c120c6fce in 441ms, sequenceid=12, compaction requested=false 2023-06-07 22:57:32,721 DEBUG [MemStoreFlusher.0] 
regionserver.HRegion(2446): Flush status journal for 1dac4ba4ee44ff9e131f583c120c6fce: 2023-06-07 22:57:32,902 WARN [Listener at localhost/41319] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-07 22:57:32,904 WARN [Listener at localhost/41319] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-07 22:57:32,905 INFO [Listener at localhost/41319] log.Slf4jLog(67): jetty-6.1.26 2023-06-07 22:57:32,910 INFO [Listener at localhost/41319] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/java.io.tmpdir/Jetty_localhost_41881_datanode____.9ncscl/webapp 2023-06-07 22:57:32,910 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178635902 to hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/oldWALs/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178635902 2023-06-07 22:57:33,001 INFO [Listener at localhost/41319] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41881 2023-06-07 22:57:33,010 WARN [Listener at localhost/45173] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-07 22:57:33,110 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4da064cd2fc7ff71: Processing first storage report for DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836 from datanode fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9 2023-06-07 22:57:33,110 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x4da064cd2fc7ff71: from storage DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836 node DatanodeRegistration(127.0.0.1:39435, datanodeUuid=fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9, infoPort=39599, infoSecurePort=0, ipcPort=45173, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-07 22:57:33,111 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4da064cd2fc7ff71: Processing first storage report for DS-4dc09c13-fb7b-4609-9540-9e7447e8d983 from datanode fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9 2023-06-07 22:57:33,111 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4da064cd2fc7ff71: from storage DS-4dc09c13-fb7b-4609-9540-9e7447e8d983 node DatanodeRegistration(127.0.0.1:39435, datanodeUuid=fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9, infoPort=39599, infoSecurePort=0, ipcPort=45173, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:57:33,560 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@1ced8d22] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:32835, datanodeUuid=65ce2998-e88d-4bde-beb2-137d8cf0e4af, infoPort=33949, infoSecurePort=0, ipcPort=41319, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141):Failed to transfer BP-373150315-172.31.14.131-1686178623141:blk_1073741853_1035 to 127.0.0.1:40179 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at 
org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:33,966 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:57:33,966 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C45615%2C1686178623749:(num 1686178623887) roll requested 2023-06-07 22:57:33,970 WARN [Thread-707] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741859_1041 2023-06-07 22:57:33,970 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes 
[DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:57:33,971 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at 
com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:57:33,971 WARN [Thread-707] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK] 2023-06-07 22:57:33,977 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-06-07 22:57:33,977 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/WALs/jenkins-hbase4.apache.org,45615,1686178623749/jenkins-hbase4.apache.org%2C45615%2C1686178623749.1686178623887 with entries=88, filesize=43.71 KB; new WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/WALs/jenkins-hbase4.apache.org,45615,1686178623749/jenkins-hbase4.apache.org%2C45615%2C1686178623749.1686178653967 2023-06-07 22:57:33,977 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39435,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK], DatanodeInfoWithStorage[127.0.0.1:32835,DS-4ccfbf03-2124-41b1-a1b2-f01295488710,DISK]] 2023-06-07 22:57:33,977 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): 
hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/WALs/jenkins-hbase4.apache.org,45615,1686178623749/jenkins-hbase4.apache.org%2C45615%2C1686178623749.1686178623887 is not closed yet, will try archiving it next time 2023-06-07 22:57:33,977 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:57:33,978 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/WALs/jenkins-hbase4.apache.org,45615,1686178623749/jenkins-hbase4.apache.org%2C45615%2C1686178623749.1686178623887; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:57:34,560 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5d21c955] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:32835, datanodeUuid=65ce2998-e88d-4bde-beb2-137d8cf0e4af, infoPort=33949, infoSecurePort=0, ipcPort=41319, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141):Failed to transfer BP-373150315-172.31.14.131-1686178623141:blk_1073741850_1032 to 127.0.0.1:40179 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:46,112 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@14d1b5ca] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39435, datanodeUuid=fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9, infoPort=39599, infoSecurePort=0, ipcPort=45173, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141):Failed to transfer BP-373150315-172.31.14.131-1686178623141:blk_1073741836_1012 to 127.0.0.1:40179 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:47,111 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@4f81c317] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39435, datanodeUuid=fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9, infoPort=39599, infoSecurePort=0, ipcPort=45173, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141):Failed to transfer BP-373150315-172.31.14.131-1686178623141:blk_1073741830_1006 to 127.0.0.1:40179 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:47,111 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@4130f7ee] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39435, datanodeUuid=fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9, infoPort=39599, infoSecurePort=0, ipcPort=45173, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141):Failed to transfer BP-373150315-172.31.14.131-1686178623141:blk_1073741828_1004 to 127.0.0.1:39577 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:49,112 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@36a54913] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39435, datanodeUuid=fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9, infoPort=39599, infoSecurePort=0, ipcPort=45173, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141):Failed to transfer BP-373150315-172.31.14.131-1686178623141:blk_1073741825_1001 to 127.0.0.1:40179 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:51,439 WARN [Thread-723] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741861_1043 2023-06-07 22:57:51,439 WARN [Thread-723] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK] 2023-06-07 22:57:51,448 INFO [Listener at localhost/45173] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178652493 with entries=2, filesize=1.57 KB; new WAL 
/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178671436 2023-06-07 22:57:51,448 DEBUG [Listener at localhost/45173] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32835,DS-4ccfbf03-2124-41b1-a1b2-f01295488710,DISK], DatanodeInfoWithStorage[127.0.0.1:39435,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]] 2023-06-07 22:57:51,448 DEBUG [Listener at localhost/45173] wal.AbstractFSWAL(716): hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942/jenkins-hbase4.apache.org%2C44917%2C1686178624942.1686178652493 is not closed yet, will try archiving it next time 2023-06-07 22:57:51,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44917] regionserver.HRegion(9158): Flush requested on 1dac4ba4ee44ff9e131f583c120c6fce 2023-06-07 22:57:51,454 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1dac4ba4ee44ff9e131f583c120c6fce 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-06-07 22:57:51,455 INFO [sync.0] wal.FSHLog(774): LowReplication-Roller was enabled. 
2023-06-07 22:57:51,463 WARN [Thread-730] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741863_1045 2023-06-07 22:57:51,463 WARN [Thread-730] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK] 2023-06-07 22:57:51,471 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-07 22:57:51,471 INFO [Listener at localhost/45173] client.ConnectionImplementation(1980): Closing master protocol: MasterService 2023-06-07 22:57:51,471 DEBUG [Listener at localhost/45173] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x07dff068 to 127.0.0.1:57222 2023-06-07 22:57:51,471 DEBUG [Listener at localhost/45173] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:57:51,471 DEBUG [Listener at localhost/45173] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-07 22:57:51,471 DEBUG [Listener at localhost/45173] util.JVMClusterUtil(257): Found active master hash=665702535, stopped=false 2023-06-07 22:57:51,471 INFO [Listener at localhost/45173] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,45615,1686178623749 2023-06-07 22:57:51,472 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/.tmp/info/d7e6b510a60246608456d53876e2f25e 2023-06-07 22:57:51,474 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-07 22:57:51,474 INFO [Listener at localhost/45173] procedure2.ProcedureExecutor(629): Stopping 2023-06-07 
22:57:51,474 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:51,474 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-07 22:57:51,474 DEBUG [Listener at localhost/45173] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7918dd2e to 127.0.0.1:57222 2023-06-07 22:57:51,474 DEBUG [Listener at localhost/45173] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:57:51,474 INFO [Listener at localhost/45173] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,46005,1686178623798' ***** 2023-06-07 22:57:51,474 INFO [Listener at localhost/45173] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-07 22:57:51,474 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:44917-0x100a78273ee0005, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-07 22:57:51,475 INFO [Listener at localhost/45173] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44917,1686178624942' ***** 2023-06-07 22:57:51,475 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 22:57:51,475 INFO [RS:0;jenkins-hbase4:46005] regionserver.HeapMemoryManager(220): Stopping 2023-06-07 22:57:51,475 INFO [Listener at localhost/45173] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-07 22:57:51,476 INFO [MemStoreFlusher.0] 
regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-07 22:57:51,476 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 22:57:51,476 INFO [RS:0;jenkins-hbase4:46005] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-07 22:57:51,476 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44917-0x100a78273ee0005, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 22:57:51,476 INFO [RS:0;jenkins-hbase4:46005] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-07 22:57:51,476 INFO [RS:1;jenkins-hbase4:44917] regionserver.HeapMemoryManager(220): Stopping 2023-06-07 22:57:51,477 INFO [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(3303): Received CLOSE for 9b891e27dca706198bb1ad0b6b11f386 2023-06-07 22:57:51,477 INFO [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46005,1686178623798 2023-06-07 22:57:51,477 DEBUG [RS:0;jenkins-hbase4:46005] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0a944537 to 127.0.0.1:57222 2023-06-07 22:57:51,477 DEBUG [RS:0;jenkins-hbase4:46005] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:57:51,477 INFO [RS:0;jenkins-hbase4:46005] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-07 22:57:51,477 INFO [RS:0;jenkins-hbase4:46005] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-07 22:57:51,478 INFO [RS:0;jenkins-hbase4:46005] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-06-07 22:57:51,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9b891e27dca706198bb1ad0b6b11f386, disabling compactions & flushes 2023-06-07 22:57:51,478 INFO [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-07 22:57:51,478 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386. 2023-06-07 22:57:51,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386. 2023-06-07 22:57:51,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386. after waiting 0 ms 2023-06-07 22:57:51,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386. 
2023-06-07 22:57:51,478 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 9b891e27dca706198bb1ad0b6b11f386 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-07 22:57:51,478 INFO [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-06-07 22:57:51,478 DEBUG [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 9b891e27dca706198bb1ad0b6b11f386=hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.} 2023-06-07 22:57:51,478 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-07 22:57:51,478 WARN [RS:0;jenkins-hbase4:46005.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:57:51,479 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-07 22:57:51,479 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46005%2C1686178623798:(num 1686178624203) roll requested 2023-06-07 22:57:51,478 DEBUG [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1504): Waiting on 1588230740, 9b891e27dca706198bb1ad0b6b11f386 2023-06-07 22:57:51,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9b891e27dca706198bb1ad0b6b11f386: 2023-06-07 22:57:51,479 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-07 22:57:51,479 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-07 22:57:51,479 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-07 22:57:51,479 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.92 KB heapSize=5.45 KB 2023-06-07 22:57:51,480 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,46005,1686178623798: Unrecoverable exception while closing hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386. 
***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:57:51,480 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:57:51,480 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-06-07 22:57:51,483 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-07 22:57:51,483 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-07 22:57:51,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-06-07 22:57:51,488 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/.tmp/info/d7e6b510a60246608456d53876e2f25e as hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/info/d7e6b510a60246608456d53876e2f25e 2023-06-07 22:57:51,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-06-07 22:57:51,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-06-07 22:57:51,489 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-06-07 22:57:51,489 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "Verbose": false, "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 998768640, "init": 513802240, "max": 2051014656, "used": 374310568 }, "NonHeapMemoryUsage": { "committed": 133521408, "init": 2555904, "max": -1, "used": 131039576 }, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-06-07 22:57:51,495 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-06-07 22:57:51,495 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798/jenkins-hbase4.apache.org%2C46005%2C1686178623798.1686178624203 with entries=3, filesize=600 B; new WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798/jenkins-hbase4.apache.org%2C46005%2C1686178623798.1686178671479 2023-06-07 22:57:51,495 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32835,DS-4ccfbf03-2124-41b1-a1b2-f01295488710,DISK], DatanodeInfoWithStorage[127.0.0.1:39435,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]] 2023-06-07 22:57:51,495 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): 
hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798/jenkins-hbase4.apache.org%2C46005%2C1686178623798.1686178624203 is not closed yet, will try archiving it next time 2023-06-07 22:57:51,495 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:57:51,495 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46005%2C1686178623798.meta:.meta(num 1686178624353) roll requested 2023-06-07 22:57:51,495 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798/jenkins-hbase4.apache.org%2C46005%2C1686178623798.1686178624203; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:57:51,501 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45615] master.MasterRpcServices(609): jenkins-hbase4.apache.org,46005,1686178623798 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,46005,1686178623798: Unrecoverable exception while closing hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386. ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:57:51,502 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/info/d7e6b510a60246608456d53876e2f25e, entries=8, sequenceid=25, filesize=13.2 K 2023-06-07 22:57:51,503 WARN [Thread-745] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741866_1048 2023-06-07 22:57:51,504 WARN [Thread-745] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK] 2023-06-07 22:57:51,504 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for 1dac4ba4ee44ff9e131f583c120c6fce in 50ms, sequenceid=25, compaction requested=false 2023-06-07 22:57:51,504 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1dac4ba4ee44ff9e131f583c120c6fce: 2023-06-07 22:57:51,504 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-06-07 22:57:51,505 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-07 22:57:51,505 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/info/d7e6b510a60246608456d53876e2f25e because midkey is the same as first or 
last row 2023-06-07 22:57:51,505 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-07 22:57:51,505 INFO [RS:1;jenkins-hbase4:44917] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-07 22:57:51,505 INFO [RS:1;jenkins-hbase4:44917] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-07 22:57:51,505 INFO [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(3303): Received CLOSE for 1dac4ba4ee44ff9e131f583c120c6fce 2023-06-07 22:57:51,505 INFO [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44917,1686178624942 2023-06-07 22:57:51,505 DEBUG [RS:1;jenkins-hbase4:44917] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x778e6d83 to 127.0.0.1:57222 2023-06-07 22:57:51,505 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1dac4ba4ee44ff9e131f583c120c6fce, disabling compactions & flushes 2023-06-07 22:57:51,505 DEBUG [RS:1;jenkins-hbase4:44917] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:57:51,505 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce. 2023-06-07 22:57:51,505 INFO [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-06-07 22:57:51,505 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce. 
2023-06-07 22:57:51,506 DEBUG [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(1478): Online Regions={1dac4ba4ee44ff9e131f583c120c6fce=TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.} 2023-06-07 22:57:51,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce. after waiting 0 ms 2023-06-07 22:57:51,506 DEBUG [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(1504): Waiting on 1dac4ba4ee44ff9e131f583c120c6fce 2023-06-07 22:57:51,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce. 2023-06-07 22:57:51,506 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1dac4ba4ee44ff9e131f583c120c6fce 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-06-07 22:57:51,511 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-06-07 22:57:51,511 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798/jenkins-hbase4.apache.org%2C46005%2C1686178623798.meta.1686178624353.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798/jenkins-hbase4.apache.org%2C46005%2C1686178623798.meta.1686178671495.meta 2023-06-07 22:57:51,511 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:32835,DS-4ccfbf03-2124-41b1-a1b2-f01295488710,DISK], DatanodeInfoWithStorage[127.0.0.1:39435,DS-0aa8a6c0-c45e-4ff2-9c44-c19474b7c836,DISK]] 2023-06-07 22:57:51,511 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798/jenkins-hbase4.apache.org%2C46005%2C1686178623798.meta.1686178624353.meta is not closed yet, will try archiving it next time 2023-06-07 22:57:51,511 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:57:51,512 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798/jenkins-hbase4.apache.org%2C46005%2C1686178623798.meta.1686178624353.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35915,DS-bcc3699b-5a64-41cf-aec8-10786f782306,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:57:51,516 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:37840 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741868_1050]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data4/current]'}, localName='127.0.0.1:39435', datanodeUuid='fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9', xmitsInProgress=0}:Exception transfering block BP-373150315-172.31.14.131-1686178623141:blk_1073741868_1050 to mirror 127.0.0.1:40179: java.net.ConnectException: Connection refused 2023-06-07 22:57:51,516 WARN [Thread-752] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741868_1050 2023-06-07 22:57:51,516 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2014200806_17 at /127.0.0.1:37840 [Receiving block BP-373150315-172.31.14.131-1686178623141:blk_1073741868_1050]] datanode.DataXceiver(323): 127.0.0.1:39435:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37840 dst: /127.0.0.1:39435 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:57:51,516 WARN [Thread-752] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK] 2023-06-07 22:57:51,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/.tmp/info/18f9ca9864924239b3aeab847aca9b78 2023-06-07 22:57:51,530 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/.tmp/info/18f9ca9864924239b3aeab847aca9b78 as hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/info/18f9ca9864924239b3aeab847aca9b78 2023-06-07 22:57:51,536 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/info/18f9ca9864924239b3aeab847aca9b78, entries=9, sequenceid=37, filesize=14.2 K 
2023-06-07 22:57:51,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=0 B/0 for 1dac4ba4ee44ff9e131f583c120c6fce in 31ms, sequenceid=37, compaction requested=true
2023-06-07 22:57:51,543 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/data/default/TestLogRolling-testLogRollOnDatanodeDeath/1dac4ba4ee44ff9e131f583c120c6fce/recovered.edits/40.seqid, newMaxSeqId=40, maxSeqId=1
2023-06-07 22:57:51,544 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.
2023-06-07 22:57:51,544 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1dac4ba4ee44ff9e131f583c120c6fce:
2023-06-07 22:57:51,544 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1686178625040.1dac4ba4ee44ff9e131f583c120c6fce.
2023-06-07 22:57:51,679 INFO [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-06-07 22:57:51,679 INFO [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(3303): Received CLOSE for 9b891e27dca706198bb1ad0b6b11f386
2023-06-07 22:57:51,679 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-06-07 22:57:51,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9b891e27dca706198bb1ad0b6b11f386, disabling compactions & flushes
2023-06-07 22:57:51,680 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-06-07 22:57:51,680 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-06-07 22:57:51,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.
2023-06-07 22:57:51,680 DEBUG [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1504): Waiting on 1588230740, 9b891e27dca706198bb1ad0b6b11f386
2023-06-07 22:57:51,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.
2023-06-07 22:57:51,680 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-06-07 22:57:51,680 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-06-07 22:57:51,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386. after waiting 0 ms
2023-06-07 22:57:51,680 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-07 22:57:51,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.
2023-06-07 22:57:51,680 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740
2023-06-07 22:57:51,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9b891e27dca706198bb1ad0b6b11f386:
2023-06-07 22:57:51,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1686178624413.9b891e27dca706198bb1ad0b6b11f386.
2023-06-07 22:57:51,706 INFO [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44917,1686178624942; all regions closed.
2023-06-07 22:57:51,706 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,44917,1686178624942
2023-06-07 22:57:51,715 DEBUG [RS:1;jenkins-hbase4:44917] wal.AbstractFSWAL(1028): Moved 4 WAL file(s) to /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/oldWALs
2023-06-07 22:57:51,715 INFO [RS:1;jenkins-hbase4:44917] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C44917%2C1686178624942:(num 1686178671436)
2023-06-07 22:57:51,715 DEBUG [RS:1;jenkins-hbase4:44917] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-07 22:57:51,715 INFO [RS:1;jenkins-hbase4:44917] regionserver.LeaseManager(133): Closed leases
2023-06-07 22:57:51,716 INFO [RS:1;jenkins-hbase4:44917] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-06-07 22:57:51,716 INFO [RS:1;jenkins-hbase4:44917] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-06-07 22:57:51,716 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-07 22:57:51,716 INFO [RS:1;jenkins-hbase4:44917] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-06-07 22:57:51,716 INFO [RS:1;jenkins-hbase4:44917] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-06-07 22:57:51,717 INFO [RS:1;jenkins-hbase4:44917] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44917
2023-06-07 22:57:51,720 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44917,1686178624942
2023-06-07 22:57:51,720 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:44917-0x100a78273ee0005, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44917,1686178624942
2023-06-07 22:57:51,720 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-07 22:57:51,720 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:44917-0x100a78273ee0005, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-07 22:57:51,720 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-07 22:57:51,721 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44917,1686178624942]
2023-06-07 22:57:51,721 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44917,1686178624942; numProcessing=1
2023-06-07 22:57:51,724 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44917,1686178624942 already deleted, retry=false
2023-06-07 22:57:51,724 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44917,1686178624942 expired; onlineServers=1
2023-06-07 22:57:51,880 INFO [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing
2023-06-07 22:57:51,880 INFO [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46005,1686178623798; all regions closed.
2023-06-07 22:57:51,880 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798
2023-06-07 22:57:51,886 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/WALs/jenkins-hbase4.apache.org,46005,1686178623798
2023-06-07 22:57:51,890 DEBUG [RS:0;jenkins-hbase4:46005] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-07 22:57:51,890 INFO [RS:0;jenkins-hbase4:46005] regionserver.LeaseManager(133): Closed leases
2023-06-07 22:57:51,890 INFO [RS:0;jenkins-hbase4:46005] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-06-07 22:57:51,890 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-07 22:57:51,891 INFO [RS:0;jenkins-hbase4:46005] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46005
2023-06-07 22:57:51,893 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46005,1686178623798
2023-06-07 22:57:51,893 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-07 22:57:51,895 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46005,1686178623798]
2023-06-07 22:57:51,895 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46005,1686178623798; numProcessing=2
2023-06-07 22:57:51,896 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46005,1686178623798 already deleted, retry=false
2023-06-07 22:57:51,896 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46005,1686178623798 expired; onlineServers=0
2023-06-07 22:57:51,896 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,45615,1686178623749' *****
2023-06-07 22:57:51,896 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-06-07 22:57:51,896 DEBUG [M:0;jenkins-hbase4:45615] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@239250aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-06-07 22:57:51,896 INFO [M:0;jenkins-hbase4:45615] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45615,1686178623749
2023-06-07 22:57:51,896 INFO [M:0;jenkins-hbase4:45615] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45615,1686178623749; all regions closed.
2023-06-07 22:57:51,896 DEBUG [M:0;jenkins-hbase4:45615] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-07 22:57:51,896 DEBUG [M:0;jenkins-hbase4:45615] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-06-07 22:57:51,897 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-06-07 22:57:51,897 DEBUG [M:0;jenkins-hbase4:45615] cleaner.HFileCleaner(317): Stopping file delete threads
2023-06-07 22:57:51,897 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178623972] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178623972,5,FailOnTimeoutGroup]
2023-06-07 22:57:51,897 INFO [M:0;jenkins-hbase4:45615] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-06-07 22:57:51,897 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178623972] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178623972,5,FailOnTimeoutGroup]
2023-06-07 22:57:51,898 INFO [M:0;jenkins-hbase4:45615] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-06-07 22:57:51,898 INFO [M:0;jenkins-hbase4:45615] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-06-07 22:57:51,898 DEBUG [M:0;jenkins-hbase4:45615] master.HMaster(1512): Stopping service threads
2023-06-07 22:57:51,898 INFO [M:0;jenkins-hbase4:45615] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-06-07 22:57:51,899 ERROR [M:0;jenkins-hbase4:45615] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-06-07 22:57:51,899 INFO [M:0;jenkins-hbase4:45615] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-06-07 22:57:51,899 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-06-07 22:57:51,903 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-06-07 22:57:51,903 DEBUG [M:0;jenkins-hbase4:45615] zookeeper.ZKUtil(398): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-06-07 22:57:51,903 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:57:51,903 WARN [M:0;jenkins-hbase4:45615] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-06-07 22:57:51,903 INFO [M:0;jenkins-hbase4:45615] assignment.AssignmentManager(315): Stopping assignment manager
2023-06-07 22:57:51,904 INFO [M:0;jenkins-hbase4:45615] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-06-07 22:57:51,904 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-07 22:57:51,904 DEBUG [M:0;jenkins-hbase4:45615] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-07 22:57:51,904 INFO [M:0;jenkins-hbase4:45615] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:57:51,904 DEBUG [M:0;jenkins-hbase4:45615] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:57:51,904 DEBUG [M:0;jenkins-hbase4:45615] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-07 22:57:51,904 DEBUG [M:0;jenkins-hbase4:45615] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:57:51,904 INFO [M:0;jenkins-hbase4:45615] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.08 KB heapSize=45.73 KB
2023-06-07 22:57:51,911 WARN [Thread-761] hdfs.DataStreamer(1658): Abandoning BP-373150315-172.31.14.131-1686178623141:blk_1073741870_1052
2023-06-07 22:57:51,912 WARN [Thread-761] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40179,DS-cbacd6fc-cfdf-4c44-8263-56100061fd5c,DISK]
2023-06-07 22:57:51,918 INFO [M:0;jenkins-hbase4:45615] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.08 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/728d14615fe64238a2d4d9fa3f975e0a
2023-06-07 22:57:51,925 DEBUG [M:0;jenkins-hbase4:45615] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/728d14615fe64238a2d4d9fa3f975e0a as hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/728d14615fe64238a2d4d9fa3f975e0a
2023-06-07 22:57:51,930 INFO [M:0;jenkins-hbase4:45615] regionserver.HStore(1080): Added hdfs://localhost:38639/user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/728d14615fe64238a2d4d9fa3f975e0a, entries=11, sequenceid=92, filesize=7.0 K
2023-06-07 22:57:51,931 INFO [M:0;jenkins-hbase4:45615] regionserver.HRegion(2948): Finished flush of dataSize ~38.08 KB/38997, heapSize ~45.72 KB/46816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=92, compaction requested=false
2023-06-07 22:57:51,932 INFO [M:0;jenkins-hbase4:45615] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:57:51,932 DEBUG [M:0;jenkins-hbase4:45615] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-07 22:57:51,933 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2966b83a-522f-0e7d-24f2-b3871e0ea719/MasterData/WALs/jenkins-hbase4.apache.org,45615,1686178623749
2023-06-07 22:57:51,936 INFO [M:0;jenkins-hbase4:45615] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-06-07 22:57:51,936 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-07 22:57:51,936 INFO [M:0;jenkins-hbase4:45615] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45615
2023-06-07 22:57:51,938 DEBUG [M:0;jenkins-hbase4:45615] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,45615,1686178623749 already deleted, retry=false
2023-06-07 22:57:51,974 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:44917-0x100a78273ee0005, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:57:51,974 INFO [RS:1;jenkins-hbase4:44917] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44917,1686178624942; zookeeper connection closed.
2023-06-07 22:57:51,975 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:44917-0x100a78273ee0005, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:57:51,975 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@65a3aad3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@65a3aad3
2023-06-07 22:57:52,068 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-06-07 22:57:52,075 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:57:52,075 INFO [M:0;jenkins-hbase4:45615] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45615,1686178623749; zookeeper connection closed.
2023-06-07 22:57:52,075 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): master:45615-0x100a78273ee0000, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:57:52,113 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@777dbdab] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39435, datanodeUuid=fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9, infoPort=39599, infoSecurePort=0, ipcPort=45173, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141):Failed to transfer BP-373150315-172.31.14.131-1686178623141:blk_1073741826_1002 to 127.0.0.1:40179 got
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:57:52,113 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@3134ab2d] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39435, datanodeUuid=fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9, infoPort=39599, infoSecurePort=0, ipcPort=45173, storageInfo=lv=-57;cid=testClusterID;nsid=278974127;c=1686178623141):Failed to transfer BP-373150315-172.31.14.131-1686178623141:blk_1073741837_1013 to 127.0.0.1:40179 got
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:57:52,175 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:57:52,175 DEBUG [Listener at localhost/34723-EventThread] zookeeper.ZKWatcher(600): regionserver:46005-0x100a78273ee0001, quorum=127.0.0.1:57222, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:57:52,175 INFO [RS:0;jenkins-hbase4:46005] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46005,1686178623798; zookeeper connection closed.
2023-06-07 22:57:52,176 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@cf9bc07] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@cf9bc07
2023-06-07 22:57:52,176 INFO [Listener at localhost/45173] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete
2023-06-07 22:57:52,176 WARN [Listener at localhost/45173] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-07 22:57:52,180 INFO [Listener at localhost/45173] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:57:52,283 WARN [BP-373150315-172.31.14.131-1686178623141 heartbeating to localhost/127.0.0.1:38639] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-07 22:57:52,283 WARN [BP-373150315-172.31.14.131-1686178623141 heartbeating to localhost/127.0.0.1:38639] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-373150315-172.31.14.131-1686178623141 (Datanode Uuid fd25d6af-1a06-43b6-abce-bf6a4ae8b8d9) service to localhost/127.0.0.1:38639
2023-06-07 22:57:52,284 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data3/current/BP-373150315-172.31.14.131-1686178623141] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:57:52,284 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data4/current/BP-373150315-172.31.14.131-1686178623141] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:57:52,285 WARN [Listener at localhost/45173] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-07 22:57:52,288 INFO [Listener at localhost/45173] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:57:52,392 WARN [BP-373150315-172.31.14.131-1686178623141 heartbeating to localhost/127.0.0.1:38639] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-07 22:57:52,392 WARN [BP-373150315-172.31.14.131-1686178623141 heartbeating to localhost/127.0.0.1:38639] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-373150315-172.31.14.131-1686178623141 (Datanode Uuid 65ce2998-e88d-4bde-beb2-137d8cf0e4af) service to localhost/127.0.0.1:38639
2023-06-07 22:57:52,392 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data9/current/BP-373150315-172.31.14.131-1686178623141] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:57:52,393 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/cluster_2a90c08d-3f51-7c6e-af6f-6f2ff8639c2a/dfs/data/data10/current/BP-373150315-172.31.14.131-1686178623141] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:57:52,404 INFO [Listener at localhost/45173] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:57:52,519 INFO [Listener at localhost/45173] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-06-07 22:57:52,548 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-06-07 22:57:52,559 INFO [Listener at localhost/45173] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=78 (was 52)
Potentially hanging thread: IPC Client (1695066929) connection to localhost/127.0.0.1:38639 from jenkins
	java.lang.Object.wait(Native Method)
	org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
	org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost:38639
	java.lang.Thread.sleep(Native Method)
	org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
	org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
	org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5
	sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
	sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
	sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
	sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
	sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
	sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
	org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
	org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
	org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-5-3
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-16-1
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-15-3
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-15-2
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Parameter Sending Thread #3
	sun.misc.Unsafe.park(Native Method)
	java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
	java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
	java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
	java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
	java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
	java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
	java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1695066929) connection to localhost/127.0.0.1:38639 from jenkins
	java.lang.Object.wait(Native Method)
	org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
	org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: IPC Client (1695066929) connection to localhost/127.0.0.1:38639 from jenkins.hfs.2
	java.lang.Object.wait(Native Method)
	org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
	org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: RS-EventLoopGroup-6-3
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-14-3
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-17-1
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ForkJoinPool-2-worker-2
	sun.misc.Unsafe.park(Native Method)
	java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824)
	java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693)
	java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Potentially hanging thread: RS-EventLoopGroup-6-1
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins@localhost:38639
	java.lang.Thread.sleep(Native Method)
	org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
	org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
	org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-7-3
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45173 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1695066929) connection to localhost/127.0.0.1:38639 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-14-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost:38639 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=471 (was 439) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=51 (was 81), ProcessCount=170 (was 170), AvailableMemoryMB=822 (was 1034) 2023-06-07 22:57:52,568 INFO [Listener at localhost/45173] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=78, OpenFileDescriptor=471, MaxFileDescriptor=60000, SystemLoadAverage=51, ProcessCount=170, AvailableMemoryMB=822 2023-06-07 22:57:52,569 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-07 22:57:52,569 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/hadoop.log.dir so I do NOT create it in target/test-data/1034f849-6064-889a-5706-f9e162b675e8 2023-06-07 22:57:52,569 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/73555ee1-5927-90b1-3e6e-77ab1dd86780/hadoop.tmp.dir so I do NOT create it in target/test-data/1034f849-6064-889a-5706-f9e162b675e8 2023-06-07 22:57:52,569 INFO [Listener at localhost/45173] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da, deleteOnExit=true 2023-06-07 22:57:52,569 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-07 22:57:52,569 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/test.cache.data in system properties and HBase conf 2023-06-07 22:57:52,570 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/hadoop.tmp.dir in system properties and HBase conf 2023-06-07 22:57:52,570 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/hadoop.log.dir in system properties and HBase conf 2023-06-07 22:57:52,570 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-07 22:57:52,570 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-07 22:57:52,570 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-07 22:57:52,571 
DEBUG [Listener at localhost/45173] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-07 22:57:52,571 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-07 22:57:52,571 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-07 22:57:52,571 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-07 22:57:52,572 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-07 22:57:52,572 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-07 22:57:52,572 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting 
yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-07 22:57:52,572 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-07 22:57:52,572 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-07 22:57:52,572 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-07 22:57:52,573 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/nfs.dump.dir in system properties and HBase conf 2023-06-07 22:57:52,573 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/java.io.tmpdir in system properties and HBase conf 2023-06-07 22:57:52,573 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting 
dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-07 22:57:52,573 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-07 22:57:52,573 INFO [Listener at localhost/45173] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-07 22:57:52,575 WARN [Listener at localhost/45173] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-07 22:57:52,578 WARN  [Listener at localhost/45173] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-07 22:57:52,578 WARN  [Listener at localhost/45173] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-07 22:57:52,624 WARN  [Listener at localhost/45173] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:57:52,626 INFO  [Listener at localhost/45173] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:57:52,630 INFO  [Listener at localhost/45173] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/java.io.tmpdir/Jetty_localhost_34419_hdfs____ly6ccl/webapp
2023-06-07 22:57:52,722 INFO  [Listener at localhost/45173] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34419
2023-06-07 22:57:52,724 WARN  [Listener at localhost/45173] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-07 22:57:52,727 WARN  [Listener at localhost/45173] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-07 22:57:52,727 WARN  [Listener at localhost/45173] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-07 22:57:52,771 WARN  [Listener at localhost/41673] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:57:52,783 WARN  [Listener at localhost/41673] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-07 22:57:52,786 WARN  [Listener at localhost/41673] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:57:52,787 INFO  [Listener at localhost/41673] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:57:52,791 INFO  [Listener at localhost/41673] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/java.io.tmpdir/Jetty_localhost_37681_datanode____9i9y5u/webapp
2023-06-07 22:57:52,883 INFO  [Listener at localhost/41673] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37681
2023-06-07 22:57:52,890 WARN  [Listener at localhost/40367] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:57:52,909 WARN  [Listener at localhost/40367] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-07 22:57:52,911 WARN  [Listener at localhost/40367] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:57:52,912 INFO  [Listener at localhost/40367] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:57:52,916 INFO  [Listener at localhost/40367] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/java.io.tmpdir/Jetty_localhost_41283_datanode____.qvudc9/webapp
2023-06-07 22:57:52,996 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x20bc8ea0269d6ca0: Processing first storage report for DS-b0864fed-8f4a-46f7-b317-d68038e3214e from datanode 23e681f6-171f-4497-b9de-1ca8d41eb2f9
2023-06-07 22:57:52,996 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x20bc8ea0269d6ca0: from storage DS-b0864fed-8f4a-46f7-b317-d68038e3214e node DatanodeRegistration(127.0.0.1:33979, datanodeUuid=23e681f6-171f-4497-b9de-1ca8d41eb2f9, infoPort=36249, infoSecurePort=0, ipcPort=40367, storageInfo=lv=-57;cid=testClusterID;nsid=1288615074;c=1686178672581), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:57:52,996 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x20bc8ea0269d6ca0: Processing first storage report for DS-7c595789-eec4-408a-82be-7774d3a1b3a9 from datanode 23e681f6-171f-4497-b9de-1ca8d41eb2f9
2023-06-07 22:57:52,996 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x20bc8ea0269d6ca0: from storage DS-7c595789-eec4-408a-82be-7774d3a1b3a9 node DatanodeRegistration(127.0.0.1:33979, datanodeUuid=23e681f6-171f-4497-b9de-1ca8d41eb2f9, infoPort=36249, infoSecurePort=0, ipcPort=40367, storageInfo=lv=-57;cid=testClusterID;nsid=1288615074;c=1686178672581), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:57:53,012 INFO  [Listener at localhost/40367] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41283
2023-06-07 22:57:53,013 INFO  [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-06-07 22:57:53,019 WARN  [Listener at localhost/45411] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:57:53,100 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x683546799b9c4eb2: Processing first storage report for DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41 from datanode 4e811068-9e92-4937-b0c3-94178167fa7b
2023-06-07 22:57:53,100 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x683546799b9c4eb2: from storage DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41 node DatanodeRegistration(127.0.0.1:38633, datanodeUuid=4e811068-9e92-4937-b0c3-94178167fa7b, infoPort=36305, infoSecurePort=0, ipcPort=45411, storageInfo=lv=-57;cid=testClusterID;nsid=1288615074;c=1686178672581), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:57:53,100 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x683546799b9c4eb2: Processing first storage report for DS-30aed949-1f0c-449e-8b3f-e8787fd2d7f8 from datanode 4e811068-9e92-4937-b0c3-94178167fa7b
2023-06-07 22:57:53,100 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x683546799b9c4eb2: from storage DS-30aed949-1f0c-449e-8b3f-e8787fd2d7f8 node DatanodeRegistration(127.0.0.1:38633, datanodeUuid=4e811068-9e92-4937-b0c3-94178167fa7b, infoPort=36305, infoSecurePort=0, ipcPort=45411, storageInfo=lv=-57;cid=testClusterID;nsid=1288615074;c=1686178672581), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:57:53,126 DEBUG [Listener at localhost/45411] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8
2023-06-07 22:57:53,129 INFO  [Listener at localhost/45411] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/zookeeper_0, clientPort=56337, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-06-07 22:57:53,130 INFO  [Listener at localhost/45411] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56337
2023-06-07 22:57:53,130 INFO  [Listener at localhost/45411] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:57:53,131 INFO  [Listener at localhost/45411] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:57:53,143 INFO  [Listener at localhost/45411] util.FSUtils(471): Created version file at hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984 with version=8
2023-06-07 22:57:53,143 INFO  [Listener at localhost/45411] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to
hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/hbase-staging 2023-06-07 22:57:53,145 INFO [Listener at localhost/45411] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-07 22:57:53,145 INFO [Listener at localhost/45411] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-07 22:57:53,145 INFO [Listener at localhost/45411] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-07 22:57:53,146 INFO [Listener at localhost/45411] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-07 22:57:53,146 INFO [Listener at localhost/45411] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-07 22:57:53,146 INFO [Listener at localhost/45411] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-07 22:57:53,146 INFO [Listener at localhost/45411] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-07 22:57:53,147 INFO [Listener at localhost/45411] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35059 2023-06-07 22:57:53,148 INFO [Listener at localhost/45411] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:57:53,149 INFO [Listener at localhost/45411] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:57:53,150 INFO [Listener at localhost/45411] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35059 connecting to ZooKeeper ensemble=127.0.0.1:56337 2023-06-07 22:57:53,157 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:350590x0, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-07 22:57:53,157 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35059-0x100a78334eb0000 connected 2023-06-07 22:57:53,175 DEBUG [Listener at localhost/45411] zookeeper.ZKUtil(164): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-07 22:57:53,175 DEBUG [Listener at localhost/45411] zookeeper.ZKUtil(164): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 22:57:53,176 DEBUG [Listener at localhost/45411] zookeeper.ZKUtil(164): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-07 22:57:53,176 DEBUG [Listener at localhost/45411] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35059 2023-06-07 22:57:53,176 DEBUG [Listener at localhost/45411] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35059 2023-06-07 22:57:53,176 DEBUG [Listener at localhost/45411] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35059 2023-06-07 
22:57:53,177 DEBUG [Listener at localhost/45411] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35059 2023-06-07 22:57:53,177 DEBUG [Listener at localhost/45411] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35059 2023-06-07 22:57:53,177 INFO [Listener at localhost/45411] master.HMaster(444): hbase.rootdir=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984, hbase.cluster.distributed=false 2023-06-07 22:57:53,190 INFO [Listener at localhost/45411] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-07 22:57:53,190 INFO [Listener at localhost/45411] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-07 22:57:53,190 INFO [Listener at localhost/45411] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-07 22:57:53,190 INFO [Listener at localhost/45411] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-07 22:57:53,190 INFO [Listener at localhost/45411] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-07 22:57:53,190 INFO [Listener at localhost/45411] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-07 22:57:53,190 INFO [Listener at localhost/45411] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 
2023-06-07 22:57:53,191 INFO [Listener at localhost/45411] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33879 2023-06-07 22:57:53,192 INFO [Listener at localhost/45411] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-07 22:57:53,193 DEBUG [Listener at localhost/45411] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-07 22:57:53,193 INFO [Listener at localhost/45411] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:57:53,194 INFO [Listener at localhost/45411] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:57:53,196 INFO [Listener at localhost/45411] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33879 connecting to ZooKeeper ensemble=127.0.0.1:56337 2023-06-07 22:57:53,198 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): regionserver:338790x0, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-07 22:57:53,199 DEBUG [Listener at localhost/45411] zookeeper.ZKUtil(164): regionserver:338790x0, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-07 22:57:53,199 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33879-0x100a78334eb0001 connected 2023-06-07 22:57:53,200 DEBUG [Listener at localhost/45411] zookeeper.ZKUtil(164): regionserver:33879-0x100a78334eb0001, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 22:57:53,200 DEBUG [Listener at localhost/45411] zookeeper.ZKUtil(164): regionserver:33879-0x100a78334eb0001, quorum=127.0.0.1:56337, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-07 22:57:53,201 DEBUG [Listener at localhost/45411] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33879 2023-06-07 22:57:53,201 DEBUG [Listener at localhost/45411] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33879 2023-06-07 22:57:53,201 DEBUG [Listener at localhost/45411] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33879 2023-06-07 22:57:53,202 DEBUG [Listener at localhost/45411] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33879 2023-06-07 22:57:53,202 DEBUG [Listener at localhost/45411] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33879 2023-06-07 22:57:53,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,35059,1686178673144 2023-06-07 22:57:53,206 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-07 22:57:53,206 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,35059,1686178673144 2023-06-07 22:57:53,208 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-07 22:57:53,208 DEBUG [Listener at 
localhost/45411-EventThread] zookeeper.ZKWatcher(600): regionserver:33879-0x100a78334eb0001, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-07 22:57:53,208 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:53,209 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-07 22:57:53,210 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-07 22:57:53,210 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,35059,1686178673144 from backup master directory 2023-06-07 22:57:53,211 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,35059,1686178673144 2023-06-07 22:57:53,211 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-07 22:57:53,211 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-07 22:57:53,211 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,35059,1686178673144 2023-06-07 22:57:53,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/hbase.id with ID: 71065c13-ad06-4bf3-8856-855ec41e4684 2023-06-07 22:57:53,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:57:53,237 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:53,245 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5fc64d92 to 127.0.0.1:56337 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-07 22:57:53,248 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@50bc7282, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-07 22:57:53,248 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 
'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-07 22:57:53,249 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-07 22:57:53,250 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-07 22:57:53,252 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/data/master/store-tmp 2023-06-07 22:57:53,261 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:57:53,261 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-07 22:57:53,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-07 22:57:53,261 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 22:57:53,261 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-07 22:57:53,261 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 22:57:53,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 22:57:53,261 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-07 22:57:53,262 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/WALs/jenkins-hbase4.apache.org,35059,1686178673144 2023-06-07 22:57:53,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35059%2C1686178673144, suffix=, logDir=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/WALs/jenkins-hbase4.apache.org,35059,1686178673144, archiveDir=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/oldWALs, maxLogs=10 2023-06-07 22:57:53,272 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/WALs/jenkins-hbase4.apache.org,35059,1686178673144/jenkins-hbase4.apache.org%2C35059%2C1686178673144.1686178673265 2023-06-07 22:57:53,272 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK], DatanodeInfoWithStorage[127.0.0.1:38633,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]] 2023-06-07 22:57:53,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:57:53,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:57:53,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:57:53,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:57:53,274 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:57:53,276 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-07 22:57:53,276 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-07 22:57:53,277 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:57:53,278 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:57:53,278 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:57:53,281 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:57:53,283 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:57:53,284 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=730431, jitterRate=-0.07121008634567261}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-07 22:57:53,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-07 22:57:53,284 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-07 22:57:53,285 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-07 22:57:53,285 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-07 22:57:53,285 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-06-07 22:57:53,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-07 22:57:53,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-07 22:57:53,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-07 22:57:53,287 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-07 22:57:53,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-07 22:57:53,299 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-07 22:57:53,299 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-06-07 22:57:53,300 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-07 22:57:53,300 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-07 22:57:53,300 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-07 22:57:53,303 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:53,303 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-07 22:57:53,304 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-07 22:57:53,305 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-07 22:57:53,306 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-07 22:57:53,306 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): regionserver:33879-0x100a78334eb0001, 
quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-06-07 22:57:53,306 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:57:53,308 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,35059,1686178673144, sessionid=0x100a78334eb0000, setting cluster-up flag (Was=false)
2023-06-07 22:57:53,310 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:57:53,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort
2023-06-07 22:57:53,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35059,1686178673144
2023-06-07 22:57:53,319 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:57:53,323 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort
2023-06-07 22:57:53,324 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35059,1686178673144
2023-06-07 22:57:53,324 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/.hbase-snapshot/.tmp
2023-06-07 22:57:53,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta
2023-06-07 22:57:53,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-06-07 22:57:53,328 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-06-07 22:57:53,328 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-06-07 22:57:53,328 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-06-07 22:57:53,328 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10
2023-06-07 22:57:53,328 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:53,328 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-06-07 22:57:53,328 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:53,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686178703331
2023-06-07 22:57:53,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1
2023-06-07 22:57:53,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2023-06-07 22:57:53,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2023-06-07 22:57:53,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner
2023-06-07 22:57:53,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
2023-06-07 22:57:53,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads
2023-06-07 22:57:53,333 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:53,333 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta
2023-06-07 22:57:53,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2
2023-06-07 22:57:53,334 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region
2023-06-07 22:57:53,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner
2023-06-07 22:57:53,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2023-06-07 22:57:53,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2023-06-07 22:57:53,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2023-06-07 22:57:53,334 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178673334,5,FailOnTimeoutGroup]
2023-06-07 22:57:53,335 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178673335,5,FailOnTimeoutGroup]
2023-06-07 22:57:53,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:53,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it.
2023-06-07 22:57:53,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:53,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:53,335 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-06-07 22:57:53,343 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-06-07 22:57:53,344 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-06-07 22:57:53,344 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984
2023-06-07 22:57:53,352 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 22:57:53,354 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-06-07 22:57:53,356 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740/info
2023-06-07 22:57:53,356 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-06-07 22:57:53,357 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 22:57:53,357 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-06-07 22:57:53,358 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740/rep_barrier
2023-06-07 22:57:53,358 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-06-07 22:57:53,359 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 22:57:53,359 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-06-07 22:57:53,360 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740/table
2023-06-07 22:57:53,360 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-06-07 22:57:53,361 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 22:57:53,361 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740
2023-06-07 22:57:53,362 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740
2023-06-07 22:57:53,364 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-06-07 22:57:53,365 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740
2023-06-07 22:57:53,369 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-07 22:57:53,369 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=805123, jitterRate=0.023767590522766113}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-06-07 22:57:53,369 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740:
2023-06-07 22:57:53,369 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-06-07 22:57:53,370 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-06-07 22:57:53,370 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-06-07 22:57:53,370 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-06-07 22:57:53,370 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-06-07 22:57:53,370 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-06-07 22:57:53,370 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-07 22:57:53,371 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta
2023-06-07 22:57:53,371 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta
2023-06-07 22:57:53,371 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}]
2023-06-07 22:57:53,373 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN
2023-06-07 22:57:53,374 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false
2023-06-07 22:57:53,403 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(951): ClusterId : 71065c13-ad06-4bf3-8856-855ec41e4684
2023-06-07 22:57:53,405 DEBUG [RS:0;jenkins-hbase4:33879] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-06-07 22:57:53,408 DEBUG [RS:0;jenkins-hbase4:33879] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-06-07 22:57:53,408 DEBUG [RS:0;jenkins-hbase4:33879] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-06-07 22:57:53,410 DEBUG [RS:0;jenkins-hbase4:33879] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-06-07 22:57:53,411 DEBUG [RS:0;jenkins-hbase4:33879] zookeeper.ReadOnlyZKClient(139): Connect 0x1737f6e4 to 127.0.0.1:56337 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-07 22:57:53,415 DEBUG [RS:0;jenkins-hbase4:33879] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@130860da, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-07 22:57:53,415 DEBUG [RS:0;jenkins-hbase4:33879] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@560afcdf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-06-07 22:57:53,424 DEBUG [RS:0;jenkins-hbase4:33879] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33879
2023-06-07 22:57:53,424 INFO [RS:0;jenkins-hbase4:33879] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-06-07 22:57:53,424 INFO [RS:0;jenkins-hbase4:33879] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-06-07 22:57:53,424 DEBUG [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1022): About to register with Master.
2023-06-07 22:57:53,424 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,35059,1686178673144 with isa=jenkins-hbase4.apache.org/172.31.14.131:33879, startcode=1686178673189
2023-06-07 22:57:53,425 DEBUG [RS:0;jenkins-hbase4:33879] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-06-07 22:57:53,428 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48165, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService
2023-06-07 22:57:53,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35059] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33879,1686178673189
2023-06-07 22:57:53,429 DEBUG [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984
2023-06-07 22:57:53,429 DEBUG [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41673
2023-06-07 22:57:53,429 DEBUG [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1
2023-06-07 22:57:53,431 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-07 22:57:53,431 DEBUG [RS:0;jenkins-hbase4:33879] zookeeper.ZKUtil(162): regionserver:33879-0x100a78334eb0001, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33879,1686178673189
2023-06-07 22:57:53,431 WARN [RS:0;jenkins-hbase4:33879] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-07 22:57:53,432 INFO [RS:0;jenkins-hbase4:33879] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-07 22:57:53,432 DEBUG [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1946): logDir=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189
2023-06-07 22:57:53,432 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33879,1686178673189]
2023-06-07 22:57:53,435 DEBUG [RS:0;jenkins-hbase4:33879] zookeeper.ZKUtil(162): regionserver:33879-0x100a78334eb0001, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33879,1686178673189
2023-06-07 22:57:53,436 DEBUG [RS:0;jenkins-hbase4:33879] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-06-07 22:57:53,437 INFO [RS:0;jenkins-hbase4:33879] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-06-07 22:57:53,438 INFO [RS:0;jenkins-hbase4:33879] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-06-07 22:57:53,438 INFO [RS:0;jenkins-hbase4:33879] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-06-07 22:57:53,438 INFO [RS:0;jenkins-hbase4:33879] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:53,438 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-06-07 22:57:53,440 INFO [RS:0;jenkins-hbase4:33879] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:53,440 DEBUG [RS:0;jenkins-hbase4:33879] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:53,440 DEBUG [RS:0;jenkins-hbase4:33879] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:53,440 DEBUG [RS:0;jenkins-hbase4:33879] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:53,440 DEBUG [RS:0;jenkins-hbase4:33879] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:53,440 DEBUG [RS:0;jenkins-hbase4:33879] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:53,440 DEBUG [RS:0;jenkins-hbase4:33879] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-06-07 22:57:53,440 DEBUG [RS:0;jenkins-hbase4:33879] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:53,440 DEBUG [RS:0;jenkins-hbase4:33879] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:53,440 DEBUG [RS:0;jenkins-hbase4:33879] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:53,440 DEBUG [RS:0;jenkins-hbase4:33879] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 22:57:53,441 INFO [RS:0;jenkins-hbase4:33879] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:53,441 INFO [RS:0;jenkins-hbase4:33879] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:53,441 INFO [RS:0;jenkins-hbase4:33879] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:53,452 INFO [RS:0;jenkins-hbase4:33879] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-06-07 22:57:53,453 INFO [RS:0;jenkins-hbase4:33879] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33879,1686178673189-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-06-07 22:57:53,464 INFO [RS:0;jenkins-hbase4:33879] regionserver.Replication(203): jenkins-hbase4.apache.org,33879,1686178673189 started
2023-06-07 22:57:53,464 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33879,1686178673189, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33879, sessionid=0x100a78334eb0001
2023-06-07 22:57:53,464 DEBUG [RS:0;jenkins-hbase4:33879] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-06-07 22:57:53,464 DEBUG [RS:0;jenkins-hbase4:33879] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33879,1686178673189
2023-06-07 22:57:53,464 DEBUG [RS:0;jenkins-hbase4:33879] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33879,1686178673189'
2023-06-07 22:57:53,464 DEBUG [RS:0;jenkins-hbase4:33879] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-07 22:57:53,465 DEBUG [RS:0;jenkins-hbase4:33879] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-07 22:57:53,465 DEBUG [RS:0;jenkins-hbase4:33879] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-06-07 22:57:53,465 DEBUG [RS:0;jenkins-hbase4:33879] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-06-07 22:57:53,465 DEBUG [RS:0;jenkins-hbase4:33879] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33879,1686178673189
2023-06-07 22:57:53,465 DEBUG [RS:0;jenkins-hbase4:33879] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33879,1686178673189'
2023-06-07 22:57:53,465 DEBUG [RS:0;jenkins-hbase4:33879] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-06-07 22:57:53,466 DEBUG [RS:0;jenkins-hbase4:33879] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-06-07 22:57:53,466 DEBUG [RS:0;jenkins-hbase4:33879] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-06-07 22:57:53,466 INFO [RS:0;jenkins-hbase4:33879] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-06-07 22:57:53,466 INFO [RS:0;jenkins-hbase4:33879] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-06-07 22:57:53,524 DEBUG [jenkins-hbase4:35059] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1
2023-06-07 22:57:53,525 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33879,1686178673189, state=OPENING
2023-06-07 22:57:53,528 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it
2023-06-07 22:57:53,529 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:57:53,529 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33879,1686178673189}]
2023-06-07 22:57:53,529 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-06-07 22:57:53,568 INFO [RS:0;jenkins-hbase4:33879] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33879%2C1686178673189, suffix=, logDir=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189, archiveDir=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/oldWALs, maxLogs=32
2023-06-07 22:57:53,576 INFO [RS:0;jenkins-hbase4:33879] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178673569
2023-06-07 22:57:53,576 DEBUG [RS:0;jenkins-hbase4:33879] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38633,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK], DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]]
2023-06-07 22:57:53,684 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33879,1686178673189
2023-06-07 22:57:53,684 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-06-07 22:57:53,687 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36364, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-06-07 22:57:53,691 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740
2023-06-07 22:57:53,691 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-07 22:57:53,693 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33879%2C1686178673189.meta, suffix=.meta, logDir=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189, archiveDir=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/oldWALs, maxLogs=32
2023-06-07 22:57:53,701 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.meta.1686178673694.meta
2023-06-07 22:57:53,701 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK], DatanodeInfoWithStorage[127.0.0.1:38633,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]]
2023-06-07 22:57:53,702 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2023-06-07 22:57:53,702 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2023-06-07 22:57:53,702 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2023-06-07 22:57:53,702 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2023-06-07 22:57:53,703 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740
2023-06-07 22:57:53,703 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 22:57:53,703 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740
2023-06-07 22:57:53,703 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740
2023-06-07 22:57:53,707 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-06-07 22:57:53,708 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740/info
2023-06-07 22:57:53,708 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740/info
2023-06-07 22:57:53,708 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-07 22:57:53,709 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:57:53,709 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-07 22:57:53,710 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740/rep_barrier 2023-06-07 22:57:53,710 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740/rep_barrier 2023-06-07 22:57:53,710 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-07 22:57:53,710 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:57:53,711 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-07 22:57:53,711 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740/table 2023-06-07 22:57:53,712 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740/table 2023-06-07 22:57:53,712 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-07 22:57:53,712 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): 
Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:57:53,713 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740 2023-06-07 22:57:53,714 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/meta/1588230740 2023-06-07 22:57:53,717 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-07 22:57:53,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-07 22:57:53,719 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=844828, jitterRate=0.07425516843795776}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-07 22:57:53,719 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-07 22:57:53,721 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686178673684 2023-06-07 22:57:53,725 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-07 22:57:53,725 INFO 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-07 22:57:53,726 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33879,1686178673189, state=OPEN 2023-06-07 22:57:53,728 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-07 22:57:53,728 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-07 22:57:53,730 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-07 22:57:53,730 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33879,1686178673189 in 199 msec 2023-06-07 22:57:53,733 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-07 22:57:53,733 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 359 msec 2023-06-07 22:57:53,735 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 408 msec 2023-06-07 22:57:53,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686178673735, completionTime=-1 2023-06-07 22:57:53,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-07 
22:57:53,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-07 22:57:53,738 DEBUG [hconnection-0x52fe3f48-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-07 22:57:53,740 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36368, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-07 22:57:53,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-07 22:57:53,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686178733741 2023-06-07 22:57:53,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686178793741 2023-06-07 22:57:53,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-06-07 22:57:53,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35059,1686178673144-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-07 22:57:53,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35059,1686178673144-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-06-07 22:57:53,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35059,1686178673144-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-07 22:57:53,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:35059, period=300000, unit=MILLISECONDS is enabled. 2023-06-07 22:57:53,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-07 22:57:53,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-06-07 22:57:53,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-07 22:57:53,750 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-07 22:57:53,751 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-07 22:57:53,752 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-07 22:57:53,753 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-07 22:57:53,755 DEBUG [HFileArchiver-5] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/.tmp/data/hbase/namespace/81f39faec7ee3fc5b5a5b09ae5ec5e8e 2023-06-07 22:57:53,756 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/.tmp/data/hbase/namespace/81f39faec7ee3fc5b5a5b09ae5ec5e8e empty. 2023-06-07 22:57:53,756 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/.tmp/data/hbase/namespace/81f39faec7ee3fc5b5a5b09ae5ec5e8e 2023-06-07 22:57:53,756 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-07 22:57:53,768 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-07 22:57:53,770 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 81f39faec7ee3fc5b5a5b09ae5ec5e8e, NAME => 'hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/.tmp 2023-06-07 22:57:53,779 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:57:53,779 DEBUG 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 81f39faec7ee3fc5b5a5b09ae5ec5e8e, disabling compactions & flushes 2023-06-07 22:57:53,779 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 2023-06-07 22:57:53,779 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 2023-06-07 22:57:53,779 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. after waiting 0 ms 2023-06-07 22:57:53,779 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 2023-06-07 22:57:53,779 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 2023-06-07 22:57:53,779 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 81f39faec7ee3fc5b5a5b09ae5ec5e8e: 2023-06-07 22:57:53,781 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-07 22:57:53,782 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178673782"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178673782"}]},"ts":"1686178673782"} 2023-06-07 22:57:53,785 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-07 22:57:53,786 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-07 22:57:53,786 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178673786"}]},"ts":"1686178673786"} 2023-06-07 22:57:53,787 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-07 22:57:53,793 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=81f39faec7ee3fc5b5a5b09ae5ec5e8e, ASSIGN}] 2023-06-07 22:57:53,795 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=81f39faec7ee3fc5b5a5b09ae5ec5e8e, ASSIGN 2023-06-07 22:57:53,796 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=81f39faec7ee3fc5b5a5b09ae5ec5e8e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33879,1686178673189; forceNewPlan=false, retain=false 2023-06-07 22:57:53,947 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=81f39faec7ee3fc5b5a5b09ae5ec5e8e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33879,1686178673189 2023-06-07 22:57:53,947 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178673947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178673947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178673947"}]},"ts":"1686178673947"} 2023-06-07 22:57:53,950 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 81f39faec7ee3fc5b5a5b09ae5ec5e8e, server=jenkins-hbase4.apache.org,33879,1686178673189}] 2023-06-07 22:57:54,106 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 2023-06-07 22:57:54,106 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 81f39faec7ee3fc5b5a5b09ae5ec5e8e, NAME => 'hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e.', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:57:54,107 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 81f39faec7ee3fc5b5a5b09ae5ec5e8e 2023-06-07 22:57:54,107 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:57:54,107 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 81f39faec7ee3fc5b5a5b09ae5ec5e8e 2023-06-07 22:57:54,107 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 81f39faec7ee3fc5b5a5b09ae5ec5e8e 2023-06-07 22:57:54,108 INFO 
[StoreOpener-81f39faec7ee3fc5b5a5b09ae5ec5e8e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 81f39faec7ee3fc5b5a5b09ae5ec5e8e 2023-06-07 22:57:54,110 DEBUG [StoreOpener-81f39faec7ee3fc5b5a5b09ae5ec5e8e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/namespace/81f39faec7ee3fc5b5a5b09ae5ec5e8e/info 2023-06-07 22:57:54,110 DEBUG [StoreOpener-81f39faec7ee3fc5b5a5b09ae5ec5e8e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/namespace/81f39faec7ee3fc5b5a5b09ae5ec5e8e/info 2023-06-07 22:57:54,110 INFO [StoreOpener-81f39faec7ee3fc5b5a5b09ae5ec5e8e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 81f39faec7ee3fc5b5a5b09ae5ec5e8e columnFamilyName info 2023-06-07 22:57:54,111 INFO [StoreOpener-81f39faec7ee3fc5b5a5b09ae5ec5e8e-1] regionserver.HStore(310): Store=81f39faec7ee3fc5b5a5b09ae5ec5e8e/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:57:54,111 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/namespace/81f39faec7ee3fc5b5a5b09ae5ec5e8e 2023-06-07 22:57:54,112 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/namespace/81f39faec7ee3fc5b5a5b09ae5ec5e8e 2023-06-07 22:57:54,114 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 81f39faec7ee3fc5b5a5b09ae5ec5e8e 2023-06-07 22:57:54,116 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/hbase/namespace/81f39faec7ee3fc5b5a5b09ae5ec5e8e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:57:54,117 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 81f39faec7ee3fc5b5a5b09ae5ec5e8e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=857212, jitterRate=0.09000208973884583}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-07 22:57:54,117 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 81f39faec7ee3fc5b5a5b09ae5ec5e8e: 2023-06-07 22:57:54,119 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e., pid=6, masterSystemTime=1686178674102 2023-06-07 22:57:54,121 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open 
deploy task for hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 2023-06-07 22:57:54,122 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 2023-06-07 22:57:54,123 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=81f39faec7ee3fc5b5a5b09ae5ec5e8e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33879,1686178673189 2023-06-07 22:57:54,123 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178674122"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1686178674122"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686178674122"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178674122"}]},"ts":"1686178674122"} 2023-06-07 22:57:54,127 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-07 22:57:54,128 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 81f39faec7ee3fc5b5a5b09ae5ec5e8e, server=jenkins-hbase4.apache.org,33879,1686178673189 in 175 msec 2023-06-07 22:57:54,130 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-07 22:57:54,130 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=81f39faec7ee3fc5b5a5b09ae5ec5e8e, ASSIGN in 334 msec 2023-06-07 22:57:54,131 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-07 
22:57:54,131 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178674131"}]},"ts":"1686178674131"} 2023-06-07 22:57:54,133 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-07 22:57:54,136 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-07 22:57:54,138 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 387 msec 2023-06-07 22:57:54,152 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-07 22:57:54,153 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-07 22:57:54,153 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:54,156 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-07 22:57:54,166 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-07 22:57:54,170 
INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-06-07 22:57:54,180 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-07 22:57:54,193 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-07 22:57:54,197 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 17 msec 2023-06-07 22:57:54,205 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-07 22:57:54,208 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-07 22:57:54,208 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.997sec 2023-06-07 22:57:54,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-07 22:57:54,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-06-07 22:57:54,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-07 22:57:54,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35059,1686178673144-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-07 22:57:54,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35059,1686178673144-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-06-07 22:57:54,211 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-07 22:57:54,304 DEBUG [Listener at localhost/45411] zookeeper.ReadOnlyZKClient(139): Connect 0x2fbdd531 to 127.0.0.1:56337 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-07 22:57:54,309 DEBUG [Listener at localhost/45411] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@56e35e77, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-07 22:57:54,310 DEBUG [hconnection-0x2d87376-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-07 22:57:54,312 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36374, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-07 22:57:54,313 INFO [Listener at localhost/45411] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,35059,1686178673144 2023-06-07 22:57:54,313 INFO [Listener at localhost/45411] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block 
reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:57:54,318 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-07 22:57:54,318 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:57:54,318 INFO [Listener at localhost/45411] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-07 22:57:54,319 INFO [Listener at localhost/45411] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-06-07 22:57:54,319 INFO [Listener at localhost/45411] wal.TestLogRolling(432): Replication=2 2023-06-07 22:57:54,320 DEBUG [Listener at localhost/45411] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-07 22:57:54,322 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41194, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-07 22:57:54,324 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35059] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-07 22:57:54,324 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35059] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-07 22:57:54,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35059] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-07 22:57:54,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35059] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-06-07 22:57:54,328 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-06-07 22:57:54,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35059] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-06-07 22:57:54,329 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-07 22:57:54,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35059] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-07 22:57:54,330 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/ab4af849ba5b414b74f0c3df9f26e9f7 2023-06-07 22:57:54,331 DEBUG [HFileArchiver-6] 
backup.HFileArchiver(153): Directory hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/ab4af849ba5b414b74f0c3df9f26e9f7 empty. 2023-06-07 22:57:54,331 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/ab4af849ba5b414b74f0c3df9f26e9f7 2023-06-07 22:57:54,331 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-06-07 22:57:54,341 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-06-07 22:57:54,342 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => ab4af849ba5b414b74f0c3df9f26e9f7, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/.tmp 2023-06-07 22:57:54,349 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; 
hotProtect now enable 2023-06-07 22:57:54,349 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing ab4af849ba5b414b74f0c3df9f26e9f7, disabling compactions & flushes 2023-06-07 22:57:54,349 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 2023-06-07 22:57:54,349 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 2023-06-07 22:57:54,349 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. after waiting 0 ms 2023-06-07 22:57:54,349 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 2023-06-07 22:57:54,349 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 
2023-06-07 22:57:54,349 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for ab4af849ba5b414b74f0c3df9f26e9f7: 2023-06-07 22:57:54,352 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-06-07 22:57:54,353 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1686178674353"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178674353"}]},"ts":"1686178674353"} 2023-06-07 22:57:54,354 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-07 22:57:54,355 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-07 22:57:54,356 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178674355"}]},"ts":"1686178674355"} 2023-06-07 22:57:54,357 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-06-07 22:57:54,360 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=ab4af849ba5b414b74f0c3df9f26e9f7, ASSIGN}] 2023-06-07 22:57:54,362 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took 
xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=ab4af849ba5b414b74f0c3df9f26e9f7, ASSIGN 2023-06-07 22:57:54,363 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=ab4af849ba5b414b74f0c3df9f26e9f7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33879,1686178673189; forceNewPlan=false, retain=false 2023-06-07 22:57:54,514 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=ab4af849ba5b414b74f0c3df9f26e9f7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33879,1686178673189 2023-06-07 22:57:54,514 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1686178674514"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178674514"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178674514"}]},"ts":"1686178674514"} 2023-06-07 22:57:54,516 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure ab4af849ba5b414b74f0c3df9f26e9f7, server=jenkins-hbase4.apache.org,33879,1686178673189}] 2023-06-07 22:57:54,673 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 
2023-06-07 22:57:54,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ab4af849ba5b414b74f0c3df9f26e9f7, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7.', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:57:54,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart ab4af849ba5b414b74f0c3df9f26e9f7 2023-06-07 22:57:54,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:57:54,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ab4af849ba5b414b74f0c3df9f26e9f7 2023-06-07 22:57:54,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ab4af849ba5b414b74f0c3df9f26e9f7 2023-06-07 22:57:54,675 INFO [StoreOpener-ab4af849ba5b414b74f0c3df9f26e9f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region ab4af849ba5b414b74f0c3df9f26e9f7 2023-06-07 22:57:54,676 DEBUG [StoreOpener-ab4af849ba5b414b74f0c3df9f26e9f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/default/TestLogRolling-testLogRollOnPipelineRestart/ab4af849ba5b414b74f0c3df9f26e9f7/info 2023-06-07 22:57:54,676 DEBUG [StoreOpener-ab4af849ba5b414b74f0c3df9f26e9f7-1] util.CommonFSUtils(522): 
Set storagePolicy=HOT for path=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/default/TestLogRolling-testLogRollOnPipelineRestart/ab4af849ba5b414b74f0c3df9f26e9f7/info 2023-06-07 22:57:54,677 INFO [StoreOpener-ab4af849ba5b414b74f0c3df9f26e9f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ab4af849ba5b414b74f0c3df9f26e9f7 columnFamilyName info 2023-06-07 22:57:54,677 INFO [StoreOpener-ab4af849ba5b414b74f0c3df9f26e9f7-1] regionserver.HStore(310): Store=ab4af849ba5b414b74f0c3df9f26e9f7/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:57:54,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/default/TestLogRolling-testLogRollOnPipelineRestart/ab4af849ba5b414b74f0c3df9f26e9f7 2023-06-07 22:57:54,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/default/TestLogRolling-testLogRollOnPipelineRestart/ab4af849ba5b414b74f0c3df9f26e9f7 2023-06-07 22:57:54,682 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ab4af849ba5b414b74f0c3df9f26e9f7 2023-06-07 22:57:54,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/data/default/TestLogRolling-testLogRollOnPipelineRestart/ab4af849ba5b414b74f0c3df9f26e9f7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:57:54,685 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ab4af849ba5b414b74f0c3df9f26e9f7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=692424, jitterRate=-0.11953777074813843}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-07 22:57:54,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ab4af849ba5b414b74f0c3df9f26e9f7: 2023-06-07 22:57:54,686 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7., pid=11, masterSystemTime=1686178674669 2023-06-07 22:57:54,688 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 2023-06-07 22:57:54,688 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 
2023-06-07 22:57:54,689 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=ab4af849ba5b414b74f0c3df9f26e9f7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33879,1686178673189 2023-06-07 22:57:54,689 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1686178674689"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1686178674689"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686178674689"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178674689"}]},"ts":"1686178674689"} 2023-06-07 22:57:54,693 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-07 22:57:54,693 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure ab4af849ba5b414b74f0c3df9f26e9f7, server=jenkins-hbase4.apache.org,33879,1686178673189 in 175 msec 2023-06-07 22:57:54,695 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-07 22:57:54,696 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=ab4af849ba5b414b74f0c3df9f26e9f7, ASSIGN in 333 msec 2023-06-07 22:57:54,696 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-07 22:57:54,696 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178674696"}]},"ts":"1686178674696"} 2023-06-07 22:57:54,698 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-06-07 22:57:54,700 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-06-07 22:57:54,702 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 376 msec 2023-06-07 22:57:57,083 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-07 22:57:59,437 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-06-07 22:58:04,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35059] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-07 22:58:04,330 INFO [Listener at localhost/45411] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 2023-06-07 22:58:04,333 DEBUG [Listener at localhost/45411] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart 2023-06-07 22:58:04,333 DEBUG [Listener at localhost/45411] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 
2023-06-07 22:58:06,340 INFO [Listener at localhost/45411] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178673569 2023-06-07 22:58:06,340 WARN [Listener at localhost/45411] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-07 22:58:06,342 WARN [ResponseProcessor for block BP-1555267459-172.31.14.131-1686178672581:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1555267459-172.31.14.131-1686178672581:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-07 22:58:06,394 WARN [ResponseProcessor for block BP-1555267459-172.31.14.131-1686178672581:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1555267459-172.31.14.131-1686178672581:blk_1073741833_1009 java.io.IOException: Bad response ERROR for BP-1555267459-172.31.14.131-1686178672581:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:38633,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-07 22:58:06,393 WARN [ResponseProcessor for block BP-1555267459-172.31.14.131-1686178672581:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1555267459-172.31.14.131-1686178672581:blk_1073741829_1005 java.io.IOException: Bad response ERROR for BP-1555267459-172.31.14.131-1686178672581:blk_1073741829_1005 from datanode 
DatanodeInfoWithStorage[127.0.0.1:38633,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-07 22:58:06,395 WARN [DataStreamer for file /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/WALs/jenkins-hbase4.apache.org,35059,1686178673144/jenkins-hbase4.apache.org%2C35059%2C1686178673144.1686178673265 block BP-1555267459-172.31.14.131-1686178672581:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1555267459-172.31.14.131-1686178672581:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK], DatanodeInfoWithStorage[127.0.0.1:38633,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:38633,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]) is bad. 2023-06-07 22:58:06,395 WARN [DataStreamer for file /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178673569 block BP-1555267459-172.31.14.131-1686178672581:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1555267459-172.31.14.131-1686178672581:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:38633,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK], DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:38633,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]) is bad. 
2023-06-07 22:58:06,395 WARN [DataStreamer for file /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.meta.1686178673694.meta block BP-1555267459-172.31.14.131-1686178672581:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1555267459-172.31.14.131-1686178672581:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK], DatanodeInfoWithStorage[127.0.0.1:38633,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:38633,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]) is bad. 2023-06-07 22:58:06,395 WARN [PacketResponder: BP-1555267459-172.31.14.131-1686178672581:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:38633]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:58:06,395 WARN [PacketResponder: BP-1555267459-172.31.14.131-1686178672581:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:38633]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:58:06,397 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1967949403_17 at /127.0.0.1:42482 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:33979:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42482 dst: /127.0.0.1:33979 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:06,398 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-264016589_17 at /127.0.0.1:42440 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:33979:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42440 dst: /127.0.0.1:33979
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:06,405 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1967949403_17 at /127.0.0.1:42470 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:33979:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42470 dst: /127.0.0.1:33979
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:33979 remote=/127.0.0.1:42470]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:06,406 WARN [PacketResponder: BP-1555267459-172.31.14.131-1686178672581:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33979]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:06,407 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1967949403_17 at /127.0.0.1:44772 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:38633:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44772 dst: /127.0.0.1:38633
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:06,407 INFO [Listener at localhost/45411] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:58:06,510 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-264016589_17 at /127.0.0.1:44756 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:38633:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44756 dst: /127.0.0.1:38633
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:06,511 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1967949403_17 at /127.0.0.1:44778 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:38633:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44778 dst: /127.0.0.1:38633
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:06,511 WARN [BP-1555267459-172.31.14.131-1686178672581 heartbeating to localhost/127.0.0.1:41673] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-07 22:58:06,513 WARN [BP-1555267459-172.31.14.131-1686178672581 heartbeating to localhost/127.0.0.1:41673] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1555267459-172.31.14.131-1686178672581 (Datanode Uuid 4e811068-9e92-4937-b0c3-94178167fa7b) service to localhost/127.0.0.1:41673
2023-06-07 22:58:06,513 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data3/current/BP-1555267459-172.31.14.131-1686178672581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:58:06,513 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data4/current/BP-1555267459-172.31.14.131-1686178672581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:58:06,520 WARN [Listener at localhost/45411] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-07 22:58:06,523 WARN [Listener at localhost/45411] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:58:06,524 INFO [Listener at localhost/45411] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:58:06,528 INFO [Listener at localhost/45411] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/java.io.tmpdir/Jetty_localhost_34155_datanode____.d49q4p/webapp
2023-06-07 22:58:06,621 INFO [Listener at localhost/45411] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34155
2023-06-07 22:58:06,629 WARN [Listener at localhost/45915] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:58:06,633 WARN [Listener at localhost/45915] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-07 22:58:06,633 WARN [ResponseProcessor for block BP-1555267459-172.31.14.131-1686178672581:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1555267459-172.31.14.131-1686178672581:blk_1073741829_1015
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-07 22:58:06,633 WARN [ResponseProcessor for block BP-1555267459-172.31.14.131-1686178672581:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1555267459-172.31.14.131-1686178672581:blk_1073741833_1016
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-07 22:58:06,633 WARN [ResponseProcessor for block BP-1555267459-172.31.14.131-1686178672581:blk_1073741832_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1555267459-172.31.14.131-1686178672581:blk_1073741832_1014
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-07 22:58:06,640 INFO [Listener at localhost/45915] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:58:06,696 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa6fe1f47f691d875: Processing first storage report for DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41 from datanode 4e811068-9e92-4937-b0c3-94178167fa7b
2023-06-07 22:58:06,696 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa6fe1f47f691d875: from storage DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41 node DatanodeRegistration(127.0.0.1:42907, datanodeUuid=4e811068-9e92-4937-b0c3-94178167fa7b, infoPort=45509, infoSecurePort=0, ipcPort=45915, storageInfo=lv=-57;cid=testClusterID;nsid=1288615074;c=1686178672581), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-06-07 22:58:06,697 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa6fe1f47f691d875: Processing first storage report for DS-30aed949-1f0c-449e-8b3f-e8787fd2d7f8 from datanode 4e811068-9e92-4937-b0c3-94178167fa7b
2023-06-07 22:58:06,697 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa6fe1f47f691d875: from storage DS-30aed949-1f0c-449e-8b3f-e8787fd2d7f8 node DatanodeRegistration(127.0.0.1:42907, datanodeUuid=4e811068-9e92-4937-b0c3-94178167fa7b, infoPort=45509, infoSecurePort=0, ipcPort=45915, storageInfo=lv=-57;cid=testClusterID;nsid=1288615074;c=1686178672581), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:58:06,742 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-264016589_17 at /127.0.0.1:52346 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:33979:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52346 dst: /127.0.0.1:33979
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:06,743 WARN [BP-1555267459-172.31.14.131-1686178672581 heartbeating to localhost/127.0.0.1:41673] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-07 22:58:06,743 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1967949403_17 at /127.0.0.1:52330 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:33979:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52330 dst: /127.0.0.1:33979
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:06,742 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1967949403_17 at /127.0.0.1:52358 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:33979:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52358 dst: /127.0.0.1:33979
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:06,744 WARN [BP-1555267459-172.31.14.131-1686178672581 heartbeating to localhost/127.0.0.1:41673] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1555267459-172.31.14.131-1686178672581 (Datanode Uuid 23e681f6-171f-4497-b9de-1ca8d41eb2f9) service to localhost/127.0.0.1:41673
2023-06-07 22:58:06,746 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data1/current/BP-1555267459-172.31.14.131-1686178672581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:58:06,747 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data2/current/BP-1555267459-172.31.14.131-1686178672581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:58:06,752 WARN [Listener at localhost/45915] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-07 22:58:06,754 WARN [Listener at localhost/45915] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:58:06,755 INFO [Listener at localhost/45915] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:58:06,760 INFO [Listener at localhost/45915] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/java.io.tmpdir/Jetty_localhost_34103_datanode____.wn1412/webapp
2023-06-07 22:58:06,852 INFO [Listener at localhost/45915] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34103
2023-06-07 22:58:06,859 WARN [Listener at localhost/38233] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:58:06,926 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb3351f40e21e07e9: Processing first storage report for DS-b0864fed-8f4a-46f7-b317-d68038e3214e from datanode 23e681f6-171f-4497-b9de-1ca8d41eb2f9
2023-06-07 22:58:06,927 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb3351f40e21e07e9: from storage DS-b0864fed-8f4a-46f7-b317-d68038e3214e node DatanodeRegistration(127.0.0.1:34899, datanodeUuid=23e681f6-171f-4497-b9de-1ca8d41eb2f9, infoPort=45331, infoSecurePort=0, ipcPort=38233, storageInfo=lv=-57;cid=testClusterID;nsid=1288615074;c=1686178672581), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-06-07 22:58:06,927 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb3351f40e21e07e9: Processing first storage report for DS-7c595789-eec4-408a-82be-7774d3a1b3a9 from datanode 23e681f6-171f-4497-b9de-1ca8d41eb2f9
2023-06-07 22:58:06,927 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb3351f40e21e07e9: from storage DS-7c595789-eec4-408a-82be-7774d3a1b3a9 node DatanodeRegistration(127.0.0.1:34899, datanodeUuid=23e681f6-171f-4497-b9de-1ca8d41eb2f9, infoPort=45331, infoSecurePort=0, ipcPort=38233, storageInfo=lv=-57;cid=testClusterID;nsid=1288615074;c=1686178672581), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:58:07,863 INFO [Listener at localhost/38233] wal.TestLogRolling(481): Data Nodes restarted
2023-06-07 22:58:07,865 INFO [Listener at localhost/38233] wal.AbstractTestLogRolling(233): Validated row row1002
2023-06-07 22:58:07,866 WARN [RS:0;jenkins-hbase4:33879.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-07 22:58:07,866 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C33879%2C1686178673189:(num 1686178673569) roll requested
2023-06-07 22:58:07,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33879] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-07 22:58:07,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33879] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36374 deadline: 1686178697865, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL
2023-06-07 22:58:07,875 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178673569 newFile=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178687867
2023-06-07 22:58:07,875 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL
2023-06-07 22:58:07,875 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178673569 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178687867
2023-06-07 22:58:07,876 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42907,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK], DatanodeInfoWithStorage[127.0.0.1:34899,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]]
2023-06-07 22:58:07,876 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-07 22:58:07,876 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178673569 is not closed yet, will try archiving it next time
2023-06-07 22:58:07,876 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178673569; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-07 22:58:19,921 INFO [Listener at localhost/38233] wal.AbstractTestLogRolling(233): Validated row row1003
2023-06-07 22:58:21,924 WARN [Listener at localhost/38233] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-07 22:58:21,926 WARN [ResponseProcessor for block BP-1555267459-172.31.14.131-1686178672581:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1555267459-172.31.14.131-1686178672581:blk_1073741838_1017
java.io.IOException: Bad response ERROR for BP-1555267459-172.31.14.131-1686178672581:blk_1073741838_1017 from datanode DatanodeInfoWithStorage[127.0.0.1:34899,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120)
2023-06-07 22:58:21,926 WARN [BP-1555267459-172.31.14.131-1686178672581 heartbeating to localhost/127.0.0.1:41673] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1555267459-172.31.14.131-1686178672581 (Datanode Uuid 23e681f6-171f-4497-b9de-1ca8d41eb2f9) service to localhost/127.0.0.1:41673
2023-06-07 22:58:21,927 WARN [DataStreamer for file /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178687867 block BP-1555267459-172.31.14.131-1686178672581:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-1555267459-172.31.14.131-1686178672581:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:42907,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK], DatanodeInfoWithStorage[127.0.0.1:34899,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:34899,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]) is bad.
2023-06-07 22:58:21,927 WARN [PacketResponder: BP-1555267459-172.31.14.131-1686178672581:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:34899]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.nio.channels.ClosedByInterruptException
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:21,928 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1967949403_17 at /127.0.0.1:44782 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:42907:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44782 dst: /127.0.0.1:42907
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:21,928 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data2/current/BP-1555267459-172.31.14.131-1686178672581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:58:21,928 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data1/current/BP-1555267459-172.31.14.131-1686178672581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:58:21,931 INFO [Listener at localhost/38233] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:58:22,037 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1967949403_17 at /127.0.0.1:46986 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:34899:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46986 dst: /127.0.0.1:34899
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:22,044 WARN [Listener at localhost/38233] conf.Configuration(1701): No unit for
dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-07 22:58:22,047 WARN [Listener at localhost/38233] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-07 22:58:22,048 INFO [Listener at localhost/38233] log.Slf4jLog(67): jetty-6.1.26 2023-06-07 22:58:22,053 INFO [Listener at localhost/38233] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/java.io.tmpdir/Jetty_localhost_34799_datanode____cl8jgd/webapp 2023-06-07 22:58:22,143 INFO [Listener at localhost/38233] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34799 2023-06-07 22:58:22,152 WARN [Listener at localhost/44533] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-07 22:58:22,155 WARN [Listener at localhost/44533] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-07 22:58:22,155 WARN [ResponseProcessor for block BP-1555267459-172.31.14.131-1686178672581:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1555267459-172.31.14.131-1686178672581:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-07 22:58:22,159 INFO [Listener at localhost/44533] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-07 22:58:22,218 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0xc2c34a514eac8b4: Processing first storage report for DS-b0864fed-8f4a-46f7-b317-d68038e3214e from datanode 23e681f6-171f-4497-b9de-1ca8d41eb2f9 2023-06-07 22:58:22,218 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc2c34a514eac8b4: from storage DS-b0864fed-8f4a-46f7-b317-d68038e3214e node DatanodeRegistration(127.0.0.1:40991, datanodeUuid=23e681f6-171f-4497-b9de-1ca8d41eb2f9, infoPort=38869, infoSecurePort=0, ipcPort=44533, storageInfo=lv=-57;cid=testClusterID;nsid=1288615074;c=1686178672581), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:58:22,218 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc2c34a514eac8b4: Processing first storage report for DS-7c595789-eec4-408a-82be-7774d3a1b3a9 from datanode 23e681f6-171f-4497-b9de-1ca8d41eb2f9 2023-06-07 22:58:22,218 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc2c34a514eac8b4: from storage DS-7c595789-eec4-408a-82be-7774d3a1b3a9 node DatanodeRegistration(127.0.0.1:40991, datanodeUuid=23e681f6-171f-4497-b9de-1ca8d41eb2f9, infoPort=38869, infoSecurePort=0, ipcPort=44533, storageInfo=lv=-57;cid=testClusterID;nsid=1288615074;c=1686178672581), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:58:22,262 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1967949403_17 at /127.0.0.1:57026 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:42907:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57026 dst: /127.0.0.1:42907 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:58:22,264 WARN [BP-1555267459-172.31.14.131-1686178672581 heartbeating to localhost/127.0.0.1:41673] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-07 22:58:22,264 WARN [BP-1555267459-172.31.14.131-1686178672581 heartbeating to localhost/127.0.0.1:41673] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1555267459-172.31.14.131-1686178672581 (Datanode Uuid 4e811068-9e92-4937-b0c3-94178167fa7b) 
service to localhost/127.0.0.1:41673 2023-06-07 22:58:22,265 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data3/current/BP-1555267459-172.31.14.131-1686178672581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 22:58:22,265 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data4/current/BP-1555267459-172.31.14.131-1686178672581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 22:58:22,271 WARN [Listener at localhost/44533] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-07 22:58:22,273 WARN [Listener at localhost/44533] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-07 22:58:22,274 INFO [Listener at localhost/44533] log.Slf4jLog(67): jetty-6.1.26 2023-06-07 22:58:22,279 INFO [Listener at localhost/44533] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/java.io.tmpdir/Jetty_localhost_36937_datanode____yeo4jl/webapp 2023-06-07 22:58:22,370 INFO [Listener at localhost/44533] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36937 2023-06-07 22:58:22,376 WARN [Listener at localhost/33779] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 
2023-06-07 22:58:22,439 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbb0f875ffddc9b5a: Processing first storage report for DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41 from datanode 4e811068-9e92-4937-b0c3-94178167fa7b 2023-06-07 22:58:22,439 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbb0f875ffddc9b5a: from storage DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41 node DatanodeRegistration(127.0.0.1:33565, datanodeUuid=4e811068-9e92-4937-b0c3-94178167fa7b, infoPort=32905, infoSecurePort=0, ipcPort=33779, storageInfo=lv=-57;cid=testClusterID;nsid=1288615074;c=1686178672581), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:58:22,439 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbb0f875ffddc9b5a: Processing first storage report for DS-30aed949-1f0c-449e-8b3f-e8787fd2d7f8 from datanode 4e811068-9e92-4937-b0c3-94178167fa7b 2023-06-07 22:58:22,440 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbb0f875ffddc9b5a: from storage DS-30aed949-1f0c-449e-8b3f-e8787fd2d7f8 node DatanodeRegistration(127.0.0.1:33565, datanodeUuid=4e811068-9e92-4937-b0c3-94178167fa7b, infoPort=32905, infoSecurePort=0, ipcPort=33779, storageInfo=lv=-57;cid=testClusterID;nsid=1288615074;c=1686178672581), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 22:58:23,332 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:23,332 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C35059%2C1686178673144:(num 1686178673265) roll requested 2023-06-07 22:58:23,332 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:23,333 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes 
[DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:23,339 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-06-07 22:58:23,339 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/WALs/jenkins-hbase4.apache.org,35059,1686178673144/jenkins-hbase4.apache.org%2C35059%2C1686178673144.1686178673265 with entries=88, filesize=43.79 KB; new WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/WALs/jenkins-hbase4.apache.org,35059,1686178673144/jenkins-hbase4.apache.org%2C35059%2C1686178673144.1686178703332 2023-06-07 22:58:23,339 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33565,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK], DatanodeInfoWithStorage[127.0.0.1:40991,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] 2023-06-07 22:58:23,339 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/WALs/jenkins-hbase4.apache.org,35059,1686178673144/jenkins-hbase4.apache.org%2C35059%2C1686178673144.1686178673265 is not closed yet, will try archiving it next time 2023-06-07 22:58:23,339 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:23,339 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/WALs/jenkins-hbase4.apache.org,35059,1686178673144/jenkins-hbase4.apache.org%2C35059%2C1686178673144.1686178673265; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:23,380 INFO [Listener at localhost/33779] wal.TestLogRolling(498): Data Nodes restarted 2023-06-07 22:58:23,381 INFO [Listener at localhost/33779] wal.AbstractTestLogRolling(233): Validated row row1004 2023-06-07 22:58:23,382 WARN [RS:0;jenkins-hbase4:33879.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:42907,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:23,383 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C33879%2C1686178673189:(num 1686178687867) roll requested 2023-06-07 22:58:23,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33879] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:42907,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:23,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33879] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36374 deadline: 1686178713382, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-06-07 22:58:23,392 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178687867 newFile=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178703383 2023-06-07 22:58:23,392 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-06-07 22:58:23,393 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178687867 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178703383 2023-06-07 22:58:23,393 WARN 
[Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:42907,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:23,393 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33565,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK], DatanodeInfoWithStorage[127.0.0.1:40991,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] 2023-06-07 22:58:23,393 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178687867; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:42907,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:23,393 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178687867 is not closed yet, will try archiving it next time 2023-06-07 22:58:35,482 DEBUG [Listener at localhost/33779] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178703383 newFile=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472 2023-06-07 22:58:35,483 INFO [Listener at localhost/33779] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178703383 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472 2023-06-07 22:58:35,487 DEBUG [Listener at localhost/33779] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40991,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK], DatanodeInfoWithStorage[127.0.0.1:33565,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]] 2023-06-07 22:58:35,487 DEBUG 
[Listener at localhost/33779] wal.AbstractFSWAL(716): hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178703383 is not closed yet, will try archiving it next time 2023-06-07 22:58:35,487 DEBUG [Listener at localhost/33779] wal.TestLogRolling(512): recovering lease for hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178673569 2023-06-07 22:58:35,488 INFO [Listener at localhost/33779] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178673569 2023-06-07 22:58:35,491 WARN [IPC Server handler 0 on default port 41673] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178673569 has not been closed. Lease recovery is in progress. 
RecoveryId = 1022 for block blk_1073741832_1014
2023-06-07 22:58:35,493 INFO [Listener at localhost/33779] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178673569 after 5ms
2023-06-07 22:58:36,463 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@7934bdf6] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1555267459-172.31.14.131-1686178672581:blk_1073741832_1014, datanode=DatanodeInfoWithStorage[127.0.0.1:33565,null,null])
java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1014, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR
  getNumBytes() = 2160
  getBytesOnDisk() = 2160
  getVisibleLength()= -1
  getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data4/current
  getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data4/current/BP-1555267459-172.31.14.131-1686178672581/current/rbw/blk_1073741832
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:39,494 INFO [Listener at localhost/33779] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178673569 after 4006ms
2023-06-07 22:58:39,494 DEBUG [Listener at localhost/33779] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178673569
2023-06-07 22:58:39,503 DEBUG [Listener at localhost/33779] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1686178674117/Put/vlen=175/seqid=0]
2023-06-07 22:58:39,503 DEBUG [Listener at localhost/33779] wal.TestLogRolling(522): #4: [default/info:d/1686178674162/Put/vlen=9/seqid=0]
2023-06-07 22:58:39,503 DEBUG [Listener at localhost/33779] wal.TestLogRolling(522): #5: [hbase/info:d/1686178674186/Put/vlen=7/seqid=0]
2023-06-07 22:58:39,503 DEBUG [Listener at localhost/33779] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1686178674685/Put/vlen=231/seqid=0]
2023-06-07 22:58:39,503 DEBUG [Listener at localhost/33779] wal.TestLogRolling(522): #4: [row1002/info:/1686178684337/Put/vlen=1045/seqid=0]
2023-06-07 22:58:39,503 DEBUG [Listener at localhost/33779] wal.ProtobufLogReader(420): EOF at position 2160
2023-06-07 22:58:39,503 DEBUG [Listener at localhost/33779] wal.TestLogRolling(512): recovering lease for hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178687867
2023-06-07 22:58:39,503 INFO [Listener at localhost/33779] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178687867
2023-06-07 22:58:39,504 WARN [IPC Server handler 1 on default port 41673] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178687867 has not been closed. Lease recovery is in progress. RecoveryId = 1023 for block blk_1073741838_1018
2023-06-07 22:58:39,504 INFO [Listener at localhost/33779] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178687867 after 0ms
2023-06-07 22:58:40,443 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@5cb23d5d] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1555267459-172.31.14.131-1686178672581:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:40991,null,null])
java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR
  getNumBytes() = 2425
  getBytesOnDisk() = 2425
  getVisibleLength()= -1
  getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data1/current
  getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data1/current/BP-1555267459-172.31.14.131-1686178672581/current/rbw/blk_1073741838
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835)
	at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
	at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383)
	at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR
  getNumBytes() = 2425
  getBytesOnDisk() = 2425
  getVisibleLength()= -1
  getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data1/current
  getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data1/current/BP-1555267459-172.31.14.131-1686178672581/current/rbw/blk_1073741838
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835)
	at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
	at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
	at org.apache.hadoop.ipc.Client.call(Client.java:1486)
	at org.apache.hadoop.ipc.Client.call(Client.java:1385)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346)
	... 4 more
2023-06-07 22:58:43,505 INFO [Listener at localhost/33779] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178687867 after 4001ms
2023-06-07 22:58:43,505 DEBUG [Listener at localhost/33779] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178687867
2023-06-07 22:58:43,509 DEBUG [Listener at localhost/33779] wal.TestLogRolling(522): #6: [row1003/info:/1686178697918/Put/vlen=1045/seqid=0]
2023-06-07 22:58:43,509 DEBUG [Listener at localhost/33779] wal.TestLogRolling(522): #7: [row1004/info:/1686178699922/Put/vlen=1045/seqid=0]
2023-06-07 22:58:43,509 DEBUG [Listener at localhost/33779] wal.ProtobufLogReader(420): EOF at position 2425
2023-06-07 22:58:43,509 DEBUG [Listener at localhost/33779] wal.TestLogRolling(512): recovering lease for hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178703383
2023-06-07 22:58:43,509 INFO [Listener at localhost/33779] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178703383
2023-06-07 22:58:43,510 INFO [Listener at localhost/33779] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178703383 after 1ms
2023-06-07 22:58:43,510 DEBUG [Listener at localhost/33779] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178703383
2023-06-07 22:58:43,513 DEBUG [Listener at localhost/33779] wal.TestLogRolling(522): #9: [row1005/info:/1686178713469/Put/vlen=1045/seqid=0]
2023-06-07 22:58:43,513 DEBUG [Listener at localhost/33779] wal.TestLogRolling(512): recovering lease for hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472
2023-06-07 22:58:43,513 INFO [Listener at localhost/33779] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472
2023-06-07 22:58:43,513 WARN [IPC Server handler 2 on default port 41673] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472 has not been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021
2023-06-07 22:58:43,514 INFO [Listener at localhost/33779] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472 after 1ms
2023-06-07 22:58:44,441 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-264016589_17 at /127.0.0.1:52030 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:40991:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52030 dst: /127.0.0.1:40991
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:40991 remote=/127.0.0.1:52030]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:44,443 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-264016589_17 at /127.0.0.1:55300 [Receiving block BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:33565:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55300 dst: /127.0.0.1:33565
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-07 22:58:44,442 WARN [ResponseProcessor for block BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-07 22:58:44,443 WARN [DataStreamer for file /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472 block BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40991,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK], DatanodeInfoWithStorage[127.0.0.1:33565,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40991,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]) is bad.
2023-06-07 22:58:44,448 WARN [DataStreamer for file /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472 block BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
	at org.apache.hadoop.ipc.Client.call(Client.java:1486)
	at org.apache.hadoop.ipc.Client.call(Client.java:1385)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-07 22:58:47,514 INFO [Listener at localhost/33779] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472 after 4001ms
2023-06-07 22:58:47,515 DEBUG [Listener at localhost/33779] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472
2023-06-07 22:58:47,519 DEBUG [Listener at localhost/33779] wal.ProtobufLogReader(420): EOF at position 83
2023-06-07 22:58:47,520 INFO [Listener at localhost/33779] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.95 KB heapSize=5.48 KB
2023-06-07 22:58:47,520 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-07 22:58:47,520 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C33879%2C1686178673189.meta:.meta(num 1686178673694) roll requested
2023-06-07 22:58:47,520 DEBUG [Listener at localhost/33779] regionserver.HRegion(2446): Flush status journal for 1588230740:
2023-06-07 22:58:47,520 INFO [Listener at localhost/33779] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-07 22:58:47,521 INFO [Listener at localhost/33779] regionserver.HRegion(2745): Flushing 81f39faec7ee3fc5b5a5b09ae5ec5e8e 1/1 column families, dataSize=78 B heapSize=488 B
2023-06-07 22:58:47,522 WARN [RS:0;jenkins-hbase4:33879.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
	at org.apache.hadoop.ipc.Client.call(Client.java:1486)
	at org.apache.hadoop.ipc.Client.call(Client.java:1385)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-07 22:58:47,523 DEBUG [Listener at localhost/33779] regionserver.HRegion(2446): Flush status journal for 81f39faec7ee3fc5b5a5b09ae5ec5e8e:
2023-06-07 22:58:47,523 INFO [Listener at localhost/33779] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
	at org.apache.hadoop.ipc.Client.call(Client.java:1486)
	at org.apache.hadoop.ipc.Client.call(Client.java:1385)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-07 22:58:47,525 INFO [Listener at localhost/33779] regionserver.HRegion(2745): Flushing ab4af849ba5b414b74f0c3df9f26e9f7 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB
2023-06-07 22:58:47,525 DEBUG [Listener at localhost/33779] regionserver.HRegion(2446): Flush status journal for ab4af849ba5b414b74f0c3df9f26e9f7:
2023-06-07 22:58:47,525 INFO [Listener at localhost/33779] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:47,540 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-07 22:58:47,540 INFO [Listener at localhost/33779] client.ConnectionImplementation(1980): Closing master protocol: MasterService 2023-06-07 22:58:47,541 DEBUG [Listener at localhost/33779] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2fbdd531 to 127.0.0.1:56337 2023-06-07 22:58:47,541 DEBUG [Listener at localhost/33779] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:58:47,542 DEBUG [Listener at localhost/33779] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-07 22:58:47,542 DEBUG [Listener at localhost/33779] util.JVMClusterUtil(257): Found active master hash=317741552, stopped=false 2023-06-07 22:58:47,542 INFO [Listener at localhost/33779] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,35059,1686178673144 2023-06-07 22:58:47,543 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of 
WAL 2023-06-07 22:58:47,543 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.meta.1686178673694.meta with entries=11, filesize=3.72 KB; new WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.meta.1686178727520.meta 2023-06-07 22:58:47,543 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40991,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK], DatanodeInfoWithStorage[127.0.0.1:33565,DS-cd4f19f6-b5d6-41f3-b83b-480c9c80cb41,DISK]] 2023-06-07 22:58:47,543 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.meta.1686178673694.meta is not closed yet, will try archiving it next time 2023-06-07 22:58:47,543 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:47,543 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C33879%2C1686178673189:(num 1686178715472) roll requested 2023-06-07 22:58:47,543 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.meta.1686178673694.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33979,DS-b0864fed-8f4a-46f7-b317-d68038e3214e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:47,544 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): regionserver:33879-0x100a78334eb0001, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-07 22:58:47,544 INFO [Listener at localhost/33779] procedure2.ProcedureExecutor(629): Stopping 2023-06-07 22:58:47,544 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-07 22:58:47,545 DEBUG [Listener at localhost/33779] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5fc64d92 to 127.0.0.1:56337 2023-06-07 22:58:47,545 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:58:47,545 DEBUG [Listener at localhost/33779] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:58:47,545 INFO [Listener at localhost/33779] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33879,1686178673189' ***** 2023-06-07 22:58:47,545 INFO [Listener at localhost/33779] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-07 22:58:47,545 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 22:58:47,545 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33879-0x100a78334eb0001, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 22:58:47,546 INFO [RS:0;jenkins-hbase4:33879] regionserver.HeapMemoryManager(220): Stopping 2023-06-07 22:58:47,546 INFO [RS:0;jenkins-hbase4:33879] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-07 22:58:47,547 INFO [RS:0;jenkins-hbase4:33879] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-07 22:58:47,547 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(3303): Received CLOSE for 81f39faec7ee3fc5b5a5b09ae5ec5e8e 2023-06-07 22:58:47,547 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(3303): Received CLOSE for ab4af849ba5b414b74f0c3df9f26e9f7 2023-06-07 22:58:47,547 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33879,1686178673189 2023-06-07 22:58:47,547 DEBUG [RS:0;jenkins-hbase4:33879] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1737f6e4 to 127.0.0.1:56337 2023-06-07 22:58:47,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 81f39faec7ee3fc5b5a5b09ae5ec5e8e, disabling compactions & flushes 2023-06-07 22:58:47,547 DEBUG [RS:0;jenkins-hbase4:33879] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:58:47,547 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 2023-06-07 22:58:47,547 INFO [RS:0;jenkins-hbase4:33879] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-07 22:58:47,547 INFO [RS:0;jenkins-hbase4:33879] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-06-07 22:58:47,547 INFO [RS:0;jenkins-hbase4:33879] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-07 22:58:47,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 2023-06-07 22:58:47,547 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-07 22:58:47,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. after waiting 0 ms 2023-06-07 22:58:47,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 2023-06-07 22:58:47,547 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 81f39faec7ee3fc5b5a5b09ae5ec5e8e 1/1 column families, dataSize=78 B heapSize=728 B 2023-06-07 22:58:47,547 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-07 22:58:47,548 DEBUG [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 81f39faec7ee3fc5b5a5b09ae5ec5e8e=hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e., ab4af849ba5b414b74f0c3df9f26e9f7=TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7.} 2023-06-07 22:58:47,548 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-07 22:58:47,548 WARN [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultMemStore(90): Snapshot called again without clearing previous. Doing nothing. Another ongoing flush or did we fail last attempt? 
2023-06-07 22:58:47,548 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-07 22:58:47,548 DEBUG [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1504): Waiting on 1588230740, 81f39faec7ee3fc5b5a5b09ae5ec5e8e, ab4af849ba5b414b74f0c3df9f26e9f7 2023-06-07 22:58:47,548 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-07 22:58:47,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 81f39faec7ee3fc5b5a5b09ae5ec5e8e: 2023-06-07 22:58:47,548 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-07 22:58:47,548 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,33879,1686178673189: Unrecoverable exception while closing hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 
***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) 
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:47,548 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-06-07 22:58:47,548 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-07 22:58:47,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-06-07 22:58:47,548 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-07 22:58:47,548 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-07 22:58:47,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-06-07 22:58:47,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-06-07 22:58:47,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-06-07 22:58:47,550 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "Verbose": false, "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1040711680, "init": 513802240, "max": 2051014656, "used": 398382648 }, "NonHeapMemoryUsage": { "committed": 139091968, "init": 2555904, "max": -1, "used": 136565160 }, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-06-07 22:58:47,550 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-07 22:58:47,551 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35059] master.MasterRpcServices(609): 
jenkins-hbase4.apache.org,33879,1686178673189 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,33879,1686178673189: Unrecoverable exception while closing hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at 
java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 
22:58:47,552 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472 newFile=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178727543 2023-06-07 22:58:47,552 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ab4af849ba5b414b74f0c3df9f26e9f7, disabling compactions & flushes 2023-06-07 22:58:47,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 2023-06-07 22:58:47,552 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-06-07 22:58:47,552 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 2023-06-07 22:58:47,552 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 
after waiting 0 ms 2023-06-07 22:58:47,552 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178727543 2023-06-07 22:58:47,552 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 2023-06-07 22:58:47,552 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at 
java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 
22:58:47,552 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ab4af849ba5b414b74f0c3df9f26e9f7: 2023-06-07 22:58:47,552 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472 failed. Cause="Unexpected BlockUCState: BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-06-07 22:58:47,552 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 2023-06-07 22:58:47,553 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at 
org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:47,553 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: 
hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189/jenkins-hbase4.apache.org%2C33879%2C1686178673189.1686178715472, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1555267459-172.31.14.131-1686178672581:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-07 22:58:47,553 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in 
/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189 2023-06-07 22:58:47,558 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/WALs/jenkins-hbase4.apache.org,33879,1686178673189 2023-06-07 22:58:47,558 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-06-07 22:58:47,559 DEBUG [regionserver/jenkins-hbase4:0.logRoller] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Failed log close in log roller 2023-06-07 22:58:47,748 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-07 22:58:47,748 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(3303): Received CLOSE for 81f39faec7ee3fc5b5a5b09ae5ec5e8e 2023-06-07 22:58:47,748 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(3303): Received CLOSE for ab4af849ba5b414b74f0c3df9f26e9f7 2023-06-07 22:58:47,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-07 22:58:47,748 DEBUG [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1504): Waiting on 1588230740, 81f39faec7ee3fc5b5a5b09ae5ec5e8e, ab4af849ba5b414b74f0c3df9f26e9f7 2023-06-07 22:58:47,748 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-07 22:58:47,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 81f39faec7ee3fc5b5a5b09ae5ec5e8e, disabling compactions & flushes 2023-06-07 22:58:47,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-07 22:58:47,748 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 2023-06-07 22:58:47,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 
2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. after waiting 0 ms 2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 81f39faec7ee3fc5b5a5b09ae5ec5e8e: 2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1686178673749.81f39faec7ee3fc5b5a5b09ae5ec5e8e. 2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ab4af849ba5b414b74f0c3df9f26e9f7, disabling compactions & flushes 2023-06-07 22:58:47,749 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 
2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. after waiting 0 ms 2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ab4af849ba5b414b74f0c3df9f26e9f7: 2023-06-07 22:58:47,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1686178674324.ab4af849ba5b414b74f0c3df9f26e9f7. 2023-06-07 22:58:47,948 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-06-07 22:58:47,949 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33879,1686178673189; all regions closed. 
2023-06-07 22:58:47,949 DEBUG [RS:0;jenkins-hbase4:33879] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:58:47,949 INFO [RS:0;jenkins-hbase4:33879] regionserver.LeaseManager(133): Closed leases 2023-06-07 22:58:47,949 INFO [RS:0;jenkins-hbase4:33879] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-07 22:58:47,949 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-07 22:58:47,950 INFO [RS:0;jenkins-hbase4:33879] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33879 2023-06-07 22:58:47,953 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): regionserver:33879-0x100a78334eb0001, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33879,1686178673189 2023-06-07 22:58:47,953 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-07 22:58:47,953 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): regionserver:33879-0x100a78334eb0001, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-07 22:58:47,954 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33879,1686178673189] 2023-06-07 22:58:47,954 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33879,1686178673189; numProcessing=1 2023-06-07 22:58:47,956 DEBUG [RegionServerTracker-0] 
zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33879,1686178673189 already deleted, retry=false 2023-06-07 22:58:47,956 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33879,1686178673189 expired; onlineServers=0 2023-06-07 22:58:47,956 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,35059,1686178673144' ***** 2023-06-07 22:58:47,956 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-07 22:58:47,956 DEBUG [M:0;jenkins-hbase4:35059] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@16cfcf76, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-07 22:58:47,956 INFO [M:0;jenkins-hbase4:35059] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35059,1686178673144 2023-06-07 22:58:47,956 INFO [M:0;jenkins-hbase4:35059] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35059,1686178673144; all regions closed. 2023-06-07 22:58:47,956 DEBUG [M:0;jenkins-hbase4:35059] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:58:47,956 DEBUG [M:0;jenkins-hbase4:35059] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-07 22:58:47,956 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-06-07 22:58:47,956 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178673335] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178673335,5,FailOnTimeoutGroup]
2023-06-07 22:58:47,956 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178673334] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178673334,5,FailOnTimeoutGroup]
2023-06-07 22:58:47,956 DEBUG [M:0;jenkins-hbase4:35059] cleaner.HFileCleaner(317): Stopping file delete threads
2023-06-07 22:58:47,957 INFO [M:0;jenkins-hbase4:35059] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-06-07 22:58:47,958 INFO [M:0;jenkins-hbase4:35059] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-06-07 22:58:47,958 INFO [M:0;jenkins-hbase4:35059] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-06-07 22:58:47,958 DEBUG [M:0;jenkins-hbase4:35059] master.HMaster(1512): Stopping service threads
2023-06-07 22:58:47,958 INFO [M:0;jenkins-hbase4:35059] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-06-07 22:58:47,958 ERROR [M:0;jenkins-hbase4:35059] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-06-07 22:58:47,958 INFO [M:0;jenkins-hbase4:35059] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-06-07 22:58:47,958 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-06-07 22:58:47,958 DEBUG [normalizer-worker-0]
normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-06-07 22:58:47,958 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:58:47,959 DEBUG [M:0;jenkins-hbase4:35059] zookeeper.ZKUtil(398): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-06-07 22:58:47,959 WARN [M:0;jenkins-hbase4:35059] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-06-07 22:58:47,959 INFO [M:0;jenkins-hbase4:35059] assignment.AssignmentManager(315): Stopping assignment manager
2023-06-07 22:58:47,959 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-07 22:58:47,959 INFO [M:0;jenkins-hbase4:35059] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-06-07 22:58:47,960 DEBUG [M:0;jenkins-hbase4:35059] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-07 22:58:47,960 INFO [M:0;jenkins-hbase4:35059] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:58:47,960 DEBUG [M:0;jenkins-hbase4:35059] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:58:47,960 DEBUG [M:0;jenkins-hbase4:35059] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-07 22:58:47,960 DEBUG [M:0;jenkins-hbase4:35059] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:58:47,960 INFO [M:0;jenkins-hbase4:35059] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.16 KB heapSize=45.78 KB
2023-06-07 22:58:47,971 INFO [M:0;jenkins-hbase4:35059] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.16 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6078c998be2a45dfb6364f359b9208ea
2023-06-07 22:58:47,977 DEBUG [M:0;jenkins-hbase4:35059] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6078c998be2a45dfb6364f359b9208ea as hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6078c998be2a45dfb6364f359b9208ea
2023-06-07 22:58:47,982 INFO [M:0;jenkins-hbase4:35059] regionserver.HStore(1080): Added hdfs://localhost:41673/user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6078c998be2a45dfb6364f359b9208ea, entries=11, sequenceid=92, filesize=7.0 K
2023-06-07 22:58:47,983 INFO [M:0;jenkins-hbase4:35059] regionserver.HRegion(2948): Finished flush of dataSize ~38.16 KB/39075, heapSize ~45.77 KB/46864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=92, compaction requested=false
2023-06-07 22:58:47,984 INFO [M:0;jenkins-hbase4:35059]
regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:58:47,985 DEBUG [M:0;jenkins-hbase4:35059] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-07 22:58:47,985 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0ae94336-7214-c196-7f79-d90dae7f3984/MasterData/WALs/jenkins-hbase4.apache.org,35059,1686178673144
2023-06-07 22:58:47,988 INFO [M:0;jenkins-hbase4:35059] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-06-07 22:58:47,988 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-07 22:58:47,989 INFO [M:0;jenkins-hbase4:35059] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35059
2023-06-07 22:58:47,992 DEBUG [M:0;jenkins-hbase4:35059] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,35059,1686178673144 already deleted, retry=false
2023-06-07 22:58:48,054 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): regionserver:33879-0x100a78334eb0001, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:58:48,054 INFO [RS:0;jenkins-hbase4:33879] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33879,1686178673189; zookeeper connection closed.
2023-06-07 22:58:48,055 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): regionserver:33879-0x100a78334eb0001, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:58:48,055 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@70ab622e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@70ab622e
2023-06-07 22:58:48,059 INFO [Listener at localhost/33779] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-06-07 22:58:48,155 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:58:48,155 INFO [M:0;jenkins-hbase4:35059] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35059,1686178673144; zookeeper connection closed.
2023-06-07 22:58:48,155 DEBUG [Listener at localhost/45411-EventThread] zookeeper.ZKWatcher(600): master:35059-0x100a78334eb0000, quorum=127.0.0.1:56337, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:58:48,156 WARN [Listener at localhost/33779] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-07 22:58:48,159 INFO [Listener at localhost/33779] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:58:48,263 WARN [BP-1555267459-172.31.14.131-1686178672581 heartbeating to localhost/127.0.0.1:41673] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-07 22:58:48,263 WARN [BP-1555267459-172.31.14.131-1686178672581 heartbeating to localhost/127.0.0.1:41673] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1555267459-172.31.14.131-1686178672581 (Datanode Uuid 4e811068-9e92-4937-b0c3-94178167fa7b) service to localhost/127.0.0.1:41673
2023-06-07 22:58:48,264 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data3/current/BP-1555267459-172.31.14.131-1686178672581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:58:48,265 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data4/current/BP-1555267459-172.31.14.131-1686178672581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:58:48,266 WARN [Listener at localhost/33779] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-07 22:58:48,269
INFO [Listener at localhost/33779] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:58:48,373 WARN [BP-1555267459-172.31.14.131-1686178672581 heartbeating to localhost/127.0.0.1:41673] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-07 22:58:48,373 WARN [BP-1555267459-172.31.14.131-1686178672581 heartbeating to localhost/127.0.0.1:41673] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1555267459-172.31.14.131-1686178672581 (Datanode Uuid 23e681f6-171f-4497-b9de-1ca8d41eb2f9) service to localhost/127.0.0.1:41673
2023-06-07 22:58:48,374 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data1/current/BP-1555267459-172.31.14.131-1686178672581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:58:48,374 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/cluster_14b2bacb-497a-1b04-43d6-6271e450d3da/dfs/data/data2/current/BP-1555267459-172.31.14.131-1686178672581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:58:48,386 INFO [Listener at localhost/33779] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:58:48,497 INFO [Listener at localhost/33779] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-06-07 22:58:48,510 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-06-07 22:58:48,520 INFO [Listener at localhost/33779] hbase.ResourceChecker(175): after:
regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=88 (was 78)
Potentially hanging thread: LeaseRenewer:jenkins@localhost:41673
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-26-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-8-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1695066929) connection to localhost/127.0.0.1:41673 from jenkins
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:41673
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1695066929) connection to localhost/127.0.0.1:41673 from jenkins
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: nioEventLoopGroup-29-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ForkJoinPool-2-worker-6
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824)
    java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693)
    java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Potentially hanging thread: nioEventLoopGroup-29-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-9-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-26-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-9-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-28-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-27-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost/33779
    java.lang.Thread.dumpThreads(Native Method)
    java.lang.Thread.getAllStackTraces(Thread.java:1615)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49)
    org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110)
    org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104)
    org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185)
    org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87)
    org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225)
    org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
    org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222)
    org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38)
    org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    java.util.concurrent.FutureTask.run(FutureTask.java:266)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-29-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-7
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1695066929) connection to localhost/127.0.0.1:41673 from jenkins.hfs.3
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: nioEventLoopGroup-27-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-28-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-27-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-8-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-9-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-28-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-8-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-26-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
- Thread LEAK? -, OpenFileDescriptor=463 (was 471), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=20 (was 51), ProcessCount=170 (was 170), AvailableMemoryMB=645 (was 822)
2023-06-07 22:58:48,528 INFO [Listener at localhost/33779] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=88, OpenFileDescriptor=463, MaxFileDescriptor=60000, SystemLoadAverage=20, ProcessCount=170, AvailableMemoryMB=644
2023-06-07 22:58:48,528 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-06-07 22:58:48,528 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/hadoop.log.dir so I do NOT create it in target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d
2023-06-07 22:58:48,528 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1034f849-6064-889a-5706-f9e162b675e8/hadoop.tmp.dir so I do NOT create it in target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d
2023-06-07
22:58:48,528 INFO [Listener at localhost/33779] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/cluster_4f30fbaa-ad97-874f-6f71-18207940961f, deleteOnExit=true
2023-06-07 22:58:48,528 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-06-07 22:58:48,528 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/test.cache.data in system properties and HBase conf
2023-06-07 22:58:48,528 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/hadoop.tmp.dir in system properties and HBase conf
2023-06-07 22:58:48,528 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/hadoop.log.dir in system properties and HBase conf
2023-06-07 22:58:48,529 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/mapreduce.cluster.local.dir in system properties and HBase conf
2023-06-07 22:58:48,529 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-06-07
22:58:48,529 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-07 22:58:48,529 DEBUG [Listener at localhost/33779] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-07 22:58:48,529 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-07 22:58:48,529 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-07 22:58:48,529 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-07 22:58:48,529 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-07 22:58:48,529 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and 
HBase conf 2023-06-07 22:58:48,530 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-07 22:58:48,530 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-07 22:58:48,530 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-07 22:58:48,530 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-07 22:58:48,530 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/nfs.dump.dir in system properties and HBase conf 2023-06-07 22:58:48,530 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/java.io.tmpdir in system properties and 
HBase conf 2023-06-07 22:58:48,530 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-07 22:58:48,530 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-07 22:58:48,530 INFO [Listener at localhost/33779] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-07 22:58:48,531 WARN [Listener at localhost/33779] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-07 22:58:48,534 WARN  [Listener at localhost/33779] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-07 22:58:48,535 WARN  [Listener at localhost/33779] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-07 22:58:48,575 WARN  [Listener at localhost/33779] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:58:48,577 INFO  [Listener at localhost/33779] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:58:48,581 INFO  [Listener at localhost/33779] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/java.io.tmpdir/Jetty_localhost_35187_hdfs____87lwxk/webapp
2023-06-07 22:58:48,673 INFO  [Listener at localhost/33779] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35187
2023-06-07 22:58:48,674 WARN  [Listener at localhost/33779] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-07 22:58:48,677 WARN  [Listener at localhost/33779] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-07 22:58:48,677 WARN  [Listener at localhost/33779] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-07 22:58:48,718 WARN  [Listener at localhost/39075] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:58:48,727 WARN  [Listener at localhost/39075] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-07 22:58:48,729 WARN  [Listener at localhost/39075] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:58:48,730 INFO  [Listener at localhost/39075] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:58:48,736 INFO  [Listener at localhost/39075] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/java.io.tmpdir/Jetty_localhost_37873_datanode____.d03dux/webapp
2023-06-07 22:58:48,827 INFO  [Listener at localhost/39075] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37873
2023-06-07 22:58:48,832 WARN  [Listener at localhost/41251] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:58:48,843 WARN  [Listener at localhost/41251] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-07 22:58:48,845 WARN  [Listener at localhost/41251] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:58:48,846 INFO  [Listener at localhost/41251] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:58:48,849 INFO  [Listener at localhost/41251] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/java.io.tmpdir/Jetty_localhost_40279_datanode____vu47lr/webapp
2023-06-07 22:58:48,919 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5d53216f4b9d2aed: Processing first storage report for DS-3c82fe63-e389-4ac7-93c7-52ad8d1ffb3f from datanode 2640536c-3547-408d-b0dd-c2ffcab63536
2023-06-07 22:58:48,919 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5d53216f4b9d2aed: from storage DS-3c82fe63-e389-4ac7-93c7-52ad8d1ffb3f node DatanodeRegistration(127.0.0.1:46301, datanodeUuid=2640536c-3547-408d-b0dd-c2ffcab63536, infoPort=41501, infoSecurePort=0, ipcPort=41251, storageInfo=lv=-57;cid=testClusterID;nsid=1628459075;c=1686178728537), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:58:48,919 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5d53216f4b9d2aed: Processing first storage report for DS-80cbf9e9-168f-42da-9ffc-e8bfdf7e3c2c from datanode 2640536c-3547-408d-b0dd-c2ffcab63536
2023-06-07 22:58:48,919 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5d53216f4b9d2aed: from storage DS-80cbf9e9-168f-42da-9ffc-e8bfdf7e3c2c node DatanodeRegistration(127.0.0.1:46301, datanodeUuid=2640536c-3547-408d-b0dd-c2ffcab63536, infoPort=41501, infoSecurePort=0, ipcPort=41251, storageInfo=lv=-57;cid=testClusterID;nsid=1628459075;c=1686178728537), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-06-07 22:58:48,945 INFO  [Listener at localhost/41251] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40279
2023-06-07 22:58:48,953 WARN  [Listener at localhost/46415] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:58:49,044 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf1cd0ae7dc043e57: Processing first storage report for DS-b554bb39-25a8-436c-9176-ad3897a8db1c from datanode 2f19f854-08dd-4313-8002-35476ef314c9
2023-06-07 22:58:49,044 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf1cd0ae7dc043e57: from storage DS-b554bb39-25a8-436c-9176-ad3897a8db1c node DatanodeRegistration(127.0.0.1:46781, datanodeUuid=2f19f854-08dd-4313-8002-35476ef314c9, infoPort=35631, infoSecurePort=0, ipcPort=46415, storageInfo=lv=-57;cid=testClusterID;nsid=1628459075;c=1686178728537), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:58:49,044 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf1cd0ae7dc043e57: Processing first storage report for DS-47769623-3c9c-4478-be75-fca48835aa0b from datanode 2f19f854-08dd-4313-8002-35476ef314c9
2023-06-07 22:58:49,044 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf1cd0ae7dc043e57: from storage DS-47769623-3c9c-4478-be75-fca48835aa0b node DatanodeRegistration(127.0.0.1:46781, datanodeUuid=2f19f854-08dd-4313-8002-35476ef314c9, infoPort=35631, infoSecurePort=0, ipcPort=46415, storageInfo=lv=-57;cid=testClusterID;nsid=1628459075;c=1686178728537), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:58:49,063 DEBUG [Listener at localhost/46415] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d
2023-06-07 22:58:49,066 INFO  [Listener at localhost/46415] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/cluster_4f30fbaa-ad97-874f-6f71-18207940961f/zookeeper_0, clientPort=63966, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/cluster_4f30fbaa-ad97-874f-6f71-18207940961f/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/cluster_4f30fbaa-ad97-874f-6f71-18207940961f/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-06-07 22:58:49,066 INFO  [Listener at localhost/46415] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63966
2023-06-07 22:58:49,067 INFO  [Listener at localhost/46415] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:58:49,068 INFO  [Listener at localhost/46415] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:58:49,080 INFO  [Listener at localhost/46415] util.FSUtils(471): Created version file at hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53 with version=8
2023-06-07 22:58:49,080 INFO  [Listener at localhost/46415] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/hbase-staging
2023-06-07 22:58:49,082 INFO  [Listener at localhost/46415] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-06-07 22:58:49,082 INFO  [Listener at localhost/46415] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 22:58:49,082 INFO  [Listener at localhost/46415] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-07 22:58:49,082 INFO  [Listener at localhost/46415] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-07 22:58:49,082 INFO  [Listener at localhost/46415] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 22:58:49,082 INFO  [Listener at localhost/46415] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-07 22:58:49,082 INFO  [Listener at localhost/46415] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-06-07 22:58:49,083 INFO  [Listener at localhost/46415] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41055
2023-06-07 22:58:49,084 INFO  [Listener at localhost/46415] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:58:49,085 INFO  [Listener at localhost/46415] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:58:49,085 INFO  [Listener at localhost/46415] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41055 connecting to ZooKeeper ensemble=127.0.0.1:63966
2023-06-07 22:58:49,091 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:410550x0, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-07 22:58:49,092 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41055-0x100a7840f6c0000 connected
2023-06-07 22:58:49,106 DEBUG [Listener at localhost/46415] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-07 22:58:49,106 DEBUG [Listener at localhost/46415] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-07 22:58:49,107 DEBUG [Listener at localhost/46415] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-07 22:58:49,107 DEBUG [Listener at localhost/46415] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41055
2023-06-07 22:58:49,107 DEBUG [Listener at localhost/46415] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41055
2023-06-07 22:58:49,108 DEBUG [Listener at localhost/46415] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41055
2023-06-07 22:58:49,109 DEBUG [Listener at localhost/46415] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41055
2023-06-07 22:58:49,109 DEBUG [Listener at localhost/46415] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41055
2023-06-07 22:58:49,109 INFO  [Listener at localhost/46415] master.HMaster(444): hbase.rootdir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53, hbase.cluster.distributed=false
2023-06-07 22:58:49,121 INFO  [Listener at localhost/46415] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-06-07 22:58:49,122 INFO  [Listener at localhost/46415] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 22:58:49,122 INFO  [Listener at localhost/46415] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-07 22:58:49,122 INFO  [Listener at localhost/46415] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-07 22:58:49,122 INFO  [Listener at localhost/46415] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 22:58:49,122 INFO  [Listener at localhost/46415] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-07 22:58:49,122 INFO  [Listener at localhost/46415] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-06-07 22:58:49,123 INFO  [Listener at localhost/46415] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41153
2023-06-07 22:58:49,124 INFO  [Listener at localhost/46415] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-06-07 22:58:49,124 DEBUG [Listener at localhost/46415] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-06-07 22:58:49,125 INFO  [Listener at localhost/46415] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:58:49,126 INFO  [Listener at localhost/46415] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:58:49,126 INFO  [Listener at localhost/46415] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41153 connecting to ZooKeeper ensemble=127.0.0.1:63966
2023-06-07 22:58:49,129 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:411530x0, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-07 22:58:49,131 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41153-0x100a7840f6c0001 connected
2023-06-07 22:58:49,131 DEBUG [Listener at localhost/46415] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-07 22:58:49,131 DEBUG [Listener at localhost/46415] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-07 22:58:49,132 DEBUG [Listener at localhost/46415] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-07 22:58:49,134 DEBUG [Listener at localhost/46415] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41153
2023-06-07 22:58:49,134 DEBUG [Listener at localhost/46415] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41153
2023-06-07 22:58:49,134 DEBUG [Listener at localhost/46415] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41153
2023-06-07 22:58:49,135 DEBUG [Listener at localhost/46415] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41153
2023-06-07 22:58:49,135 DEBUG [Listener at localhost/46415] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41153
2023-06-07 22:58:49,136 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41055,1686178729081
2023-06-07 22:58:49,137 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-07 22:58:49,138 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41055,1686178729081
2023-06-07 22:58:49,144 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-07 22:58:49,144 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-07 22:58:49,144 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:58:49,145 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-07 22:58:49,145 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41055,1686178729081 from backup master directory
2023-06-07 22:58:49,145 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-07 22:58:49,147 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41055,1686178729081
2023-06-07 22:58:49,147 WARN  [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-07 22:58:49,147 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-07 22:58:49,147 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41055,1686178729081
2023-06-07 22:58:49,160 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/hbase.id with ID: 1473c164-bde4-477c-a411-2f7f7ff03ee5
2023-06-07 22:58:49,171 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:58:49,173 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:58:49,183 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3b011774 to 127.0.0.1:63966 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-07 22:58:49,187 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61853917, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-07 22:58:49,187 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-07 22:58:49,187 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-06-07 22:58:49,188 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-07 22:58:49,189 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/data/master/store-tmp
2023-06-07 22:58:49,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 22:58:49,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-07 22:58:49,197 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:58:49,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:58:49,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-07 22:58:49,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:58:49,197 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:58:49,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-07 22:58:49,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/WALs/jenkins-hbase4.apache.org,41055,1686178729081
2023-06-07 22:58:49,200 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41055%2C1686178729081, suffix=, logDir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/WALs/jenkins-hbase4.apache.org,41055,1686178729081, archiveDir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/oldWALs, maxLogs=10
2023-06-07 22:58:49,205 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/WALs/jenkins-hbase4.apache.org,41055,1686178729081/jenkins-hbase4.apache.org%2C41055%2C1686178729081.1686178729200
2023-06-07 22:58:49,205 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46301,DS-3c82fe63-e389-4ac7-93c7-52ad8d1ffb3f,DISK], DatanodeInfoWithStorage[127.0.0.1:46781,DS-b554bb39-25a8-436c-9176-ad3897a8db1c,DISK]]
2023-06-07 22:58:49,205 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-06-07 22:58:49,206 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 22:58:49,206 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-06-07 22:58:49,206 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-06-07 22:58:49,207 INFO  [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-06-07 22:58:49,209 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-06-07 22:58:49,209 INFO  [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-06-07 22:58:49,209 INFO  [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 22:58:49,210 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-06-07 22:58:49,210 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-06-07 22:58:49,213 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682
2023-06-07 22:58:49,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-07 22:58:49,215 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=727960, jitterRate=-0.07435187697410583}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-07 22:58:49,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-07 22:58:49,216 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4
2023-06-07 22:58:49,216 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5
2023-06-07 22:58:49,217 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50
2023-06-07 22:58:49,217 INFO  [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery...
2023-06-07 22:58:49,217 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-07 22:58:49,217 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-07 22:58:49,217 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-07 22:58:49,218 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-07 22:58:49,219 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-07 22:58:49,230 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-07 22:58:49,230 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-06-07 22:58:49,230 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-07 22:58:49,230 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-07 22:58:49,231 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-07 22:58:49,233 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:58:49,233 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-07 22:58:49,234 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-07 22:58:49,234 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-07 22:58:49,235 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-07 22:58:49,236 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, 
quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-07 22:58:49,236 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:58:49,236 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41055,1686178729081, sessionid=0x100a7840f6c0000, setting cluster-up flag (Was=false) 2023-06-07 22:58:49,240 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:58:49,248 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-07 22:58:49,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41055,1686178729081 2023-06-07 22:58:49,252 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:58:49,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-07 22:58:49,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41055,1686178729081 2023-06-07 22:58:49,257 WARN 
[master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/.hbase-snapshot/.tmp 2023-06-07 22:58:49,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-07 22:58:49,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:58:49,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:58:49,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:58:49,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:58:49,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-06-07 22:58:49,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:58:49,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-07 22:58:49,260 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:58:49,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686178759261 2023-06-07 22:58:49,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-07 22:58:49,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-07 22:58:49,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-07 22:58:49,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-07 22:58:49,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-07 22:58:49,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-07 22:58:49,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-06-07 22:58:49,262 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-07 22:58:49,263 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-07 22:58:49,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-07 22:58:49,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-07 22:58:49,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-07 22:58:49,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-07 22:58:49,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-07 22:58:49,264 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178729263,5,FailOnTimeoutGroup] 2023-06-07 22:58:49,264 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178729264,5,FailOnTimeoutGroup] 2023-06-07 22:58:49,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-06-07 22:58:49,264 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-07 22:58:49,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-07 22:58:49,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-07 22:58:49,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-07 22:58:49,273 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-07 22:58:49,273 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-07 22:58:49,273 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53 2023-06-07 22:58:49,280 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:58:49,282 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-07 22:58:49,283 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/info 2023-06-07 22:58:49,283 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-07 22:58:49,284 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:58:49,284 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-07 22:58:49,285 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/rep_barrier 2023-06-07 22:58:49,286 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-07 22:58:49,286 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:58:49,286 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-07 22:58:49,287 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/table 2023-06-07 22:58:49,287 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-07 22:58:49,288 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:58:49,288 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740 2023-06-07 22:58:49,289 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740 2023-06-07 22:58:49,291 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-07 22:58:49,292 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-07 22:58:49,293 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:58:49,294 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=851614, jitterRate=0.08288387954235077}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-07 22:58:49,294 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-07 22:58:49,294 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-07 22:58:49,294 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-07 22:58:49,294 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-07 22:58:49,294 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-07 22:58:49,294 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-07 22:58:49,294 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-07 22:58:49,295 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-07 22:58:49,296 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-07 22:58:49,296 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-07 22:58:49,296 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-07 22:58:49,297 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-07 22:58:49,298 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-07 22:58:49,336 INFO [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(951): ClusterId : 1473c164-bde4-477c-a411-2f7f7ff03ee5 2023-06-07 22:58:49,338 DEBUG [RS:0;jenkins-hbase4:41153] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-07 22:58:49,341 DEBUG [RS:0;jenkins-hbase4:41153] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-07 22:58:49,341 DEBUG [RS:0;jenkins-hbase4:41153] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-07 22:58:49,343 DEBUG [RS:0;jenkins-hbase4:41153] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-07 22:58:49,344 DEBUG [RS:0;jenkins-hbase4:41153] zookeeper.ReadOnlyZKClient(139): Connect 0x26f3714c to 127.0.0.1:63966 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-07 22:58:49,347 DEBUG [RS:0;jenkins-hbase4:41153] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28348be2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 
2023-06-07 22:58:49,347 DEBUG [RS:0;jenkins-hbase4:41153] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2c0b1a3c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-07 22:58:49,356 DEBUG [RS:0;jenkins-hbase4:41153] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:41153 2023-06-07 22:58:49,356 INFO [RS:0;jenkins-hbase4:41153] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-07 22:58:49,356 INFO [RS:0;jenkins-hbase4:41153] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-07 22:58:49,356 DEBUG [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(1022): About to register with Master. 2023-06-07 22:58:49,357 INFO [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,41055,1686178729081 with isa=jenkins-hbase4.apache.org/172.31.14.131:41153, startcode=1686178729121 2023-06-07 22:58:49,357 DEBUG [RS:0;jenkins-hbase4:41153] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-07 22:58:49,360 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43321, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-06-07 22:58:49,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:58:49,362 DEBUG [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53 2023-06-07 22:58:49,362 DEBUG 
[RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39075 2023-06-07 22:58:49,362 DEBUG [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-07 22:58:49,364 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-07 22:58:49,364 DEBUG [RS:0;jenkins-hbase4:41153] zookeeper.ZKUtil(162): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:58:49,364 WARN [RS:0;jenkins-hbase4:41153] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-07 22:58:49,364 INFO [RS:0;jenkins-hbase4:41153] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-07 22:58:49,364 DEBUG [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(1946): logDir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:58:49,365 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41153,1686178729121] 2023-06-07 22:58:49,370 DEBUG [RS:0;jenkins-hbase4:41153] zookeeper.ZKUtil(162): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:58:49,370 DEBUG [RS:0;jenkins-hbase4:41153] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-07 22:58:49,371 INFO [RS:0;jenkins-hbase4:41153] 
regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-07 22:58:49,372 INFO [RS:0;jenkins-hbase4:41153] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-07 22:58:49,372 INFO [RS:0;jenkins-hbase4:41153] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-07 22:58:49,372 INFO [RS:0;jenkins-hbase4:41153] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-07 22:58:49,373 INFO [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-07 22:58:49,374 INFO [RS:0;jenkins-hbase4:41153] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-07 22:58:49,374 DEBUG [RS:0;jenkins-hbase4:41153] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:58:49,374 DEBUG [RS:0;jenkins-hbase4:41153] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:58:49,374 DEBUG [RS:0;jenkins-hbase4:41153] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:58:49,374 DEBUG [RS:0;jenkins-hbase4:41153] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:58:49,374 DEBUG [RS:0;jenkins-hbase4:41153] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:58:49,374 DEBUG [RS:0;jenkins-hbase4:41153] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-07 22:58:49,375 DEBUG [RS:0;jenkins-hbase4:41153] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:58:49,375 DEBUG [RS:0;jenkins-hbase4:41153] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:58:49,375 DEBUG [RS:0;jenkins-hbase4:41153] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:58:49,375 DEBUG [RS:0;jenkins-hbase4:41153] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 
2023-06-07 22:58:49,375 INFO [RS:0;jenkins-hbase4:41153] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-07 22:58:49,375 INFO [RS:0;jenkins-hbase4:41153] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-07 22:58:49,375 INFO [RS:0;jenkins-hbase4:41153] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-07 22:58:49,387 INFO [RS:0;jenkins-hbase4:41153] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-07 22:58:49,388 INFO [RS:0;jenkins-hbase4:41153] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41153,1686178729121-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-07 22:58:49,398 INFO [RS:0;jenkins-hbase4:41153] regionserver.Replication(203): jenkins-hbase4.apache.org,41153,1686178729121 started 2023-06-07 22:58:49,398 INFO [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41153,1686178729121, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41153, sessionid=0x100a7840f6c0001 2023-06-07 22:58:49,398 DEBUG [RS:0;jenkins-hbase4:41153] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-07 22:58:49,398 DEBUG [RS:0;jenkins-hbase4:41153] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:58:49,398 DEBUG [RS:0;jenkins-hbase4:41153] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41153,1686178729121' 2023-06-07 22:58:49,398 DEBUG [RS:0;jenkins-hbase4:41153] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-07 22:58:49,399 DEBUG [RS:0;jenkins-hbase4:41153] 
procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-07 22:58:49,399 DEBUG [RS:0;jenkins-hbase4:41153] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-07 22:58:49,399 DEBUG [RS:0;jenkins-hbase4:41153] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-07 22:58:49,399 DEBUG [RS:0;jenkins-hbase4:41153] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:58:49,399 DEBUG [RS:0;jenkins-hbase4:41153] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41153,1686178729121' 2023-06-07 22:58:49,399 DEBUG [RS:0;jenkins-hbase4:41153] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-07 22:58:49,400 DEBUG [RS:0;jenkins-hbase4:41153] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-07 22:58:49,400 DEBUG [RS:0;jenkins-hbase4:41153] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-07 22:58:49,400 INFO [RS:0;jenkins-hbase4:41153] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-07 22:58:49,400 INFO [RS:0;jenkins-hbase4:41153] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-07 22:58:49,444 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-07 22:58:49,449 DEBUG [jenkins-hbase4:41055] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-07 22:58:49,449 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41153,1686178729121, state=OPENING 2023-06-07 22:58:49,452 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-07 22:58:49,453 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:58:49,454 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41153,1686178729121}] 2023-06-07 22:58:49,454 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-07 22:58:49,502 INFO [RS:0;jenkins-hbase4:41153] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41153%2C1686178729121, suffix=, logDir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121, archiveDir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/oldWALs, maxLogs=32 2023-06-07 22:58:49,510 INFO [RS:0;jenkins-hbase4:41153] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178729503 2023-06-07 22:58:49,510 DEBUG 
[RS:0;jenkins-hbase4:41153] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46301,DS-3c82fe63-e389-4ac7-93c7-52ad8d1ffb3f,DISK], DatanodeInfoWithStorage[127.0.0.1:46781,DS-b554bb39-25a8-436c-9176-ad3897a8db1c,DISK]] 2023-06-07 22:58:49,608 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:58:49,608 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-07 22:58:49,610 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57884, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-07 22:58:49,614 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-07 22:58:49,614 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-07 22:58:49,616 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41153%2C1686178729121.meta, suffix=.meta, logDir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121, archiveDir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/oldWALs, maxLogs=32 2023-06-07 22:58:49,626 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121/jenkins-hbase4.apache.org%2C41153%2C1686178729121.meta.1686178729617.meta 2023-06-07 22:58:49,626 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new 
FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46781,DS-b554bb39-25a8-436c-9176-ad3897a8db1c,DISK], DatanodeInfoWithStorage[127.0.0.1:46301,DS-3c82fe63-e389-4ac7-93c7-52ad8d1ffb3f,DISK]] 2023-06-07 22:58:49,626 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:58:49,626 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-07 22:58:49,626 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-07 22:58:49,627 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-06-07 22:58:49,627 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-07 22:58:49,627 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:58:49,627 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-07 22:58:49,627 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-07 22:58:49,629 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-07 22:58:49,630 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/info 2023-06-07 22:58:49,630 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/info 2023-06-07 22:58:49,630 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-07 22:58:49,631 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:58:49,631 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-07 22:58:49,632 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/rep_barrier 2023-06-07 22:58:49,632 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/rep_barrier 2023-06-07 22:58:49,632 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-07 22:58:49,633 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:58:49,633 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-07 22:58:49,634 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/table 2023-06-07 22:58:49,634 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/table 2023-06-07 22:58:49,634 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-07 22:58:49,635 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): 
Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:58:49,635 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740 2023-06-07 22:58:49,637 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740 2023-06-07 22:58:49,639 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-07 22:58:49,641 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-07 22:58:49,641 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=727572, jitterRate=-0.07484522461891174}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-07 22:58:49,641 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-07 22:58:49,644 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686178729608 2023-06-07 22:58:49,648 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-07 22:58:49,648 INFO 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-07 22:58:49,649 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41153,1686178729121, state=OPEN 2023-06-07 22:58:49,651 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-07 22:58:49,651 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-07 22:58:49,653 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-07 22:58:49,653 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41153,1686178729121 in 197 msec 2023-06-07 22:58:49,655 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-07 22:58:49,655 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 357 msec 2023-06-07 22:58:49,658 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 398 msec 2023-06-07 22:58:49,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686178729658, completionTime=-1 2023-06-07 22:58:49,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-07 
22:58:49,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-07 22:58:49,661 DEBUG [hconnection-0x56f54ab4-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-07 22:58:49,662 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57890, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-07 22:58:49,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-07 22:58:49,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686178789664 2023-06-07 22:58:49,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686178849664 2023-06-07 22:58:49,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-06-07 22:58:49,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41055,1686178729081-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-07 22:58:49,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41055,1686178729081-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-06-07 22:58:49,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41055,1686178729081-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-07 22:58:49,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41055, period=300000, unit=MILLISECONDS is enabled. 2023-06-07 22:58:49,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-07 22:58:49,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-06-07 22:58:49,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-07 22:58:49,673 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-07 22:58:49,673 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-07 22:58:49,675 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-07 22:58:49,676 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-07 22:58:49,678 DEBUG [HFileArchiver-7] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/.tmp/data/hbase/namespace/4e68d662b1c7282eaff433c7362d7d3d 2023-06-07 22:58:49,679 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/.tmp/data/hbase/namespace/4e68d662b1c7282eaff433c7362d7d3d empty. 2023-06-07 22:58:49,679 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/.tmp/data/hbase/namespace/4e68d662b1c7282eaff433c7362d7d3d 2023-06-07 22:58:49,679 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-07 22:58:49,693 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-07 22:58:49,695 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4e68d662b1c7282eaff433c7362d7d3d, NAME => 'hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/.tmp 2023-06-07 22:58:49,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:58:49,713 DEBUG 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 4e68d662b1c7282eaff433c7362d7d3d, disabling compactions & flushes 2023-06-07 22:58:49,713 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. 2023-06-07 22:58:49,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. 2023-06-07 22:58:49,714 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. after waiting 0 ms 2023-06-07 22:58:49,714 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. 2023-06-07 22:58:49,714 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. 2023-06-07 22:58:49,714 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 4e68d662b1c7282eaff433c7362d7d3d: 2023-06-07 22:58:49,717 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-07 22:58:49,718 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178729717"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178729717"}]},"ts":"1686178729717"} 2023-06-07 22:58:49,720 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-07 22:58:49,721 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-07 22:58:49,722 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178729722"}]},"ts":"1686178729722"} 2023-06-07 22:58:49,723 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-07 22:58:49,731 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4e68d662b1c7282eaff433c7362d7d3d, ASSIGN}] 2023-06-07 22:58:49,733 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4e68d662b1c7282eaff433c7362d7d3d, ASSIGN 2023-06-07 22:58:49,734 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=4e68d662b1c7282eaff433c7362d7d3d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41153,1686178729121; forceNewPlan=false, retain=false 2023-06-07 22:58:49,886 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=4e68d662b1c7282eaff433c7362d7d3d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:58:49,886 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178729886"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178729886"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178729886"}]},"ts":"1686178729886"} 2023-06-07 22:58:49,888 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 4e68d662b1c7282eaff433c7362d7d3d, server=jenkins-hbase4.apache.org,41153,1686178729121}] 2023-06-07 22:58:50,045 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. 2023-06-07 22:58:50,045 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4e68d662b1c7282eaff433c7362d7d3d, NAME => 'hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d.', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:58:50,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 4e68d662b1c7282eaff433c7362d7d3d 2023-06-07 22:58:50,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:58:50,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4e68d662b1c7282eaff433c7362d7d3d 2023-06-07 22:58:50,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4e68d662b1c7282eaff433c7362d7d3d 2023-06-07 22:58:50,047 INFO 
[StoreOpener-4e68d662b1c7282eaff433c7362d7d3d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4e68d662b1c7282eaff433c7362d7d3d
2023-06-07 22:58:50,048 DEBUG [StoreOpener-4e68d662b1c7282eaff433c7362d7d3d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/namespace/4e68d662b1c7282eaff433c7362d7d3d/info
2023-06-07 22:58:50,049 DEBUG [StoreOpener-4e68d662b1c7282eaff433c7362d7d3d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/namespace/4e68d662b1c7282eaff433c7362d7d3d/info
2023-06-07 22:58:50,049 INFO [StoreOpener-4e68d662b1c7282eaff433c7362d7d3d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4e68d662b1c7282eaff433c7362d7d3d columnFamilyName info
2023-06-07 22:58:50,049 INFO [StoreOpener-4e68d662b1c7282eaff433c7362d7d3d-1] regionserver.HStore(310): Store=4e68d662b1c7282eaff433c7362d7d3d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 22:58:50,050 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/namespace/4e68d662b1c7282eaff433c7362d7d3d
2023-06-07 22:58:50,050 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/namespace/4e68d662b1c7282eaff433c7362d7d3d
2023-06-07 22:58:50,053 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4e68d662b1c7282eaff433c7362d7d3d
2023-06-07 22:58:50,057 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/namespace/4e68d662b1c7282eaff433c7362d7d3d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-07 22:58:50,058 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4e68d662b1c7282eaff433c7362d7d3d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=802674, jitterRate=0.0206536203622818}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-07 22:58:50,058 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4e68d662b1c7282eaff433c7362d7d3d:
2023-06-07 22:58:50,059 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d., pid=6, masterSystemTime=1686178730040
2023-06-07 22:58:50,061 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d.
2023-06-07 22:58:50,061 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d.
2023-06-07 22:58:50,062 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=4e68d662b1c7282eaff433c7362d7d3d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:58:50,062 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178730062"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1686178730062"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686178730062"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178730062"}]},"ts":"1686178730062"}
2023-06-07 22:58:50,066 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-06-07 22:58:50,066 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 4e68d662b1c7282eaff433c7362d7d3d, server=jenkins-hbase4.apache.org,41153,1686178729121 in 176 msec
2023-06-07 22:58:50,068 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-06-07 22:58:50,068 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=4e68d662b1c7282eaff433c7362d7d3d, ASSIGN in 336 msec
2023-06-07 22:58:50,069 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-06-07 22:58:50,069 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178730069"}]},"ts":"1686178730069"}
2023-06-07 22:58:50,071 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-06-07 22:58:50,074 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-06-07 22:58:50,074 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-06-07 22:58:50,076 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-06-07 22:58:50,076 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 402 msec
2023-06-07 22:58:50,076 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:58:50,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-06-07 22:58:50,088 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-06-07 22:58:50,092 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec
2023-06-07 22:58:50,101 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-06-07 22:58:50,109 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-06-07 22:58:50,112 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec
2023-06-07 22:58:50,125 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-06-07 22:58:50,128 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-06-07 22:58:50,128 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.981sec
2023-06-07 22:58:50,128 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-06-07 22:58:50,128 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-06-07 22:58:50,128 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-06-07 22:58:50,128 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41055,1686178729081-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-06-07 22:58:50,128 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41055,1686178729081-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-06-07 22:58:50,130 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-06-07 22:58:50,137 DEBUG [Listener at localhost/46415] zookeeper.ReadOnlyZKClient(139): Connect 0x4326b091 to 127.0.0.1:63966 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-07 22:58:50,141 DEBUG [Listener at localhost/46415] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@60348279, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-07 22:58:50,142 DEBUG [hconnection-0x1e8de211-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-07 22:58:50,145 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57902, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-07 22:58:50,146 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41055,1686178729081
2023-06-07 22:58:50,146 INFO [Listener at localhost/46415] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 22:58:50,150 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer
2023-06-07 22:58:50,150 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:58:50,151 INFO [Listener at localhost/46415] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false
2023-06-07 22:58:50,152 DEBUG [Listener at localhost/46415] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-06-07 22:58:50,155 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42734, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-06-07 22:58:50,156 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions.
2023-06-07 22:58:50,156 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing.
2023-06-07 22:58:50,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-07 22:58:50,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:58:50,161 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION
2023-06-07 22:58:50,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9
2023-06-07 22:58:50,163 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-06-07 22:58:50,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-06-07 22:58:50,164 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a
2023-06-07 22:58:50,165 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a empty.
2023-06-07 22:58:50,165 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a
2023-06-07 22:58:50,165 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions
2023-06-07 22:58:50,175 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001
2023-06-07 22:58:50,177 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => eb0570753fefb676a232a1a0e16d349a, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/.tmp
2023-06-07 22:58:50,184 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 22:58:50,184 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing eb0570753fefb676a232a1a0e16d349a, disabling compactions & flushes
2023-06-07 22:58:50,184 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.
2023-06-07 22:58:50,184 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.
2023-06-07 22:58:50,184 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. after waiting 0 ms
2023-06-07 22:58:50,184 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.
2023-06-07 22:58:50,184 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.
2023-06-07 22:58:50,184 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for eb0570753fefb676a232a1a0e16d349a:
2023-06-07 22:58:50,186 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META
2023-06-07 22:58:50,187 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1686178730187"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178730187"}]},"ts":"1686178730187"}
2023-06-07 22:58:50,189 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-06-07 22:58:50,190 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-06-07 22:58:50,190 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178730190"}]},"ts":"1686178730190"}
2023-06-07 22:58:50,191 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta
2023-06-07 22:58:50,195 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=eb0570753fefb676a232a1a0e16d349a, ASSIGN}]
2023-06-07 22:58:50,197 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=eb0570753fefb676a232a1a0e16d349a, ASSIGN
2023-06-07 22:58:50,198 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=eb0570753fefb676a232a1a0e16d349a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41153,1686178729121; forceNewPlan=false, retain=false
2023-06-07 22:58:50,349 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=eb0570753fefb676a232a1a0e16d349a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:58:50,349 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1686178730348"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178730348"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178730348"}]},"ts":"1686178730348"}
2023-06-07 22:58:50,351 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure eb0570753fefb676a232a1a0e16d349a, server=jenkins-hbase4.apache.org,41153,1686178729121}]
2023-06-07 22:58:50,507 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.
2023-06-07 22:58:50,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eb0570753fefb676a232a1a0e16d349a, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.', STARTKEY => '', ENDKEY => ''}
2023-06-07 22:58:50,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling eb0570753fefb676a232a1a0e16d349a
2023-06-07 22:58:50,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 22:58:50,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eb0570753fefb676a232a1a0e16d349a
2023-06-07 22:58:50,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eb0570753fefb676a232a1a0e16d349a
2023-06-07 22:58:50,509 INFO [StoreOpener-eb0570753fefb676a232a1a0e16d349a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region eb0570753fefb676a232a1a0e16d349a
2023-06-07 22:58:50,511 DEBUG [StoreOpener-eb0570753fefb676a232a1a0e16d349a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info
2023-06-07 22:58:50,511 DEBUG [StoreOpener-eb0570753fefb676a232a1a0e16d349a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info
2023-06-07 22:58:50,511 INFO [StoreOpener-eb0570753fefb676a232a1a0e16d349a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eb0570753fefb676a232a1a0e16d349a columnFamilyName info
2023-06-07 22:58:50,512 INFO [StoreOpener-eb0570753fefb676a232a1a0e16d349a-1] regionserver.HStore(310): Store=eb0570753fefb676a232a1a0e16d349a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 22:58:50,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a
2023-06-07 22:58:50,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a
2023-06-07 22:58:50,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eb0570753fefb676a232a1a0e16d349a
2023-06-07 22:58:50,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-07 22:58:50,517 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eb0570753fefb676a232a1a0e16d349a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=807836, jitterRate=0.027217403054237366}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-07 22:58:50,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eb0570753fefb676a232a1a0e16d349a:
2023-06-07 22:58:50,518 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a., pid=11, masterSystemTime=1686178730504
2023-06-07 22:58:50,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.
2023-06-07 22:58:50,520 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.
2023-06-07 22:58:50,521 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=eb0570753fefb676a232a1a0e16d349a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:58:50,521 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1686178730521"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1686178730521"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686178730521"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178730521"}]},"ts":"1686178730521"}
2023-06-07 22:58:50,525 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10
2023-06-07 22:58:50,525 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure eb0570753fefb676a232a1a0e16d349a, server=jenkins-hbase4.apache.org,41153,1686178729121 in 172 msec
2023-06-07 22:58:50,527 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9
2023-06-07 22:58:50,527 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=eb0570753fefb676a232a1a0e16d349a, ASSIGN in 330 msec
2023-06-07 22:58:50,528 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-06-07 22:58:50,528 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178730528"}]},"ts":"1686178730528"}
2023-06-07 22:58:50,530 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta
2023-06-07 22:58:50,533 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION
2023-06-07 22:58:50,535 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 377 msec
2023-06-07 22:58:55,176 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2023-06-07 22:58:55,371 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-06-07 22:59:00,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-06-07 22:59:00,164 INFO [Listener at localhost/46415] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed
2023-06-07 22:59:00,167 DEBUG [Listener at localhost/46415] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:00,167 DEBUG [Listener at localhost/46415] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.
2023-06-07 22:59:00,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc
2023-06-07 22:59:00,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace
2023-06-07 22:59:00,186 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace'
2023-06-07 22:59:00,186 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-06-07 22:59:00,187 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire'
2023-06-07 22:59:00,187 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members.
2023-06-07 22:59:00,187 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace
2023-06-07 22:59:00,187 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace
2023-06-07 22:59:00,190 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-06-07 22:59:00,190 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:00,190 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-06-07 22:59:00,190 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-07 22:59:00,190 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:00,190 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire'
2023-06-07 22:59:00,191 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace
2023-06-07 22:59:00,191 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace
2023-06-07 22:59:00,191 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4
2023-06-07 22:59:00,191 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace
2023-06-07 22:59:00,192 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace
2023-06-07 22:59:00,193 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace
2023-06-07 22:59:00,194 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms
2023-06-07 22:59:00,194 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-06-07 22:59:00,194 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage
2023-06-07 22:59:00,195 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions
2023-06-07 22:59:00,195 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish.
2023-06-07 22:59:00,195 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d.
2023-06-07 22:59:00,195 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. started...
2023-06-07 22:59:00,196 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 4e68d662b1c7282eaff433c7362d7d3d 1/1 column families, dataSize=78 B heapSize=488 B
2023-06-07 22:59:00,206 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/namespace/4e68d662b1c7282eaff433c7362d7d3d/.tmp/info/90883ed6aa4b4588bde9433e76ab5aa2
2023-06-07 22:59:00,214 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/namespace/4e68d662b1c7282eaff433c7362d7d3d/.tmp/info/90883ed6aa4b4588bde9433e76ab5aa2 as hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/namespace/4e68d662b1c7282eaff433c7362d7d3d/info/90883ed6aa4b4588bde9433e76ab5aa2
2023-06-07 22:59:00,219 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/namespace/4e68d662b1c7282eaff433c7362d7d3d/info/90883ed6aa4b4588bde9433e76ab5aa2, entries=2, sequenceid=6, filesize=4.8 K
2023-06-07 22:59:00,219 INFO
[rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 4e68d662b1c7282eaff433c7362d7d3d in 23ms, sequenceid=6, compaction requested=false 2023-06-07 22:59:00,220 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 4e68d662b1c7282eaff433c7362d7d3d: 2023-06-07 22:59:00,220 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. 2023-06-07 22:59:00,220 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-07 22:59:00,220 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
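The `HRegion(2948)` entry above packs the flush metrics (data size, heap size, duration, sequence id) into one line. A minimal sketch of pulling those numbers out with a regular expression, fitted only to the exact wording of this log (the pattern and names here are illustrative, not an HBase API):

```python
# Hedged sketch: parse the "Finished flush ..." metrics from a log line like
# the one above. The regex is fitted to this log's wording only.
import re

FLUSH_RE = re.compile(
    r"Finished flush of dataSize ~(?P<data>[\d.]+ B)/\d+, "
    r"heapSize ~(?P<heap>[\d.]+ B)/\d+, "
    r"currentSize=(?P<cur>[\d.]+ B)/\d+ "
    r"for (?P<region>\w+) in (?P<ms>\d+)ms, sequenceid=(?P<seq>\d+)"
)

line = ("Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, "
        "currentSize=0 B/0 for 4e68d662b1c7282eaff433c7362d7d3d in 23ms, "
        "sequenceid=6, compaction requested=false")

m = FLUSH_RE.search(line)
metrics = m.groupdict()  # e.g. {'data': '78 B', ..., 'ms': '23', 'seq': '6'}
```

Lines reporting KB-sized flushes (e.g. `~1.05 KB` later in this log) would need the unit alternation widened; this sketch only handles the byte-sized case shown here.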
2023-06-07 22:59:00,220 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:00,220 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-06-07 22:59:00,220 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,41153,1686178729121' joining acquired barrier for procedure (hbase:namespace) in zk 2023-06-07 22:59:00,223 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:00,223 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-07 22:59:00,223 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:00,223 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-07 22:59:00,223 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-07 22:59:00,223 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/reached/hbase:namespace 2023-06-07 22:59:00,223 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-07 22:59:00,223 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-07 22:59:00,224 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-07 22:59:00,224 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-07 22:59:00,224 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:00,225 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-07 22:59:00,225 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,41153,1686178729121' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-06-07 22:59:00,225 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-06-07 22:59:00,225 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@53e90040[Count = 0] remaining members to acquire global barrier 2023-06-07 22:59:00,225 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-07 22:59:00,227 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-07 22:59:00,227 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-07 22:59:00,227 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-07 22:59:00,227 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 
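The acquire/reached exchange traced above (members create `acquired/<proc>/<member>` znodes, the coordinator counts them down on a `CountDownLatch`, then creates `reached/<proc>` to release everyone) can be sketched with in-memory primitives instead of ZooKeeper. This is an illustrative model of the barrier shape, not HBase's actual `Procedure`/`Subprocedure` code:

```python
# Hedged sketch: the two-phase acquire/reached barrier from the log, modeled
# with threading primitives standing in for znodes and watches.
import threading

class TwoPhaseBarrier:
    """Coordinator collects one 'acquire' per member, then opens the
    global 'reached' barrier."""
    def __init__(self, n_members):
        self.n = n_members
        self.acquired = threading.Semaphore(0)  # one release per member
        self.reached = threading.Event()        # global barrier

    def member_run(self, do_work):
        do_work()                # e.g. flush regions locally
        self.acquired.release()  # ~ create acquired/<proc>/<member> znode
        self.reached.wait()      # ~ watch the reached/<proc> znode

    def coordinator_run(self):
        for _ in range(self.n):  # ~ CountDownLatch over acquired nodes
            self.acquired.acquire()
        self.reached.set()       # ~ create reached/<proc>: barrier released

events = []
barrier = TwoPhaseBarrier(1)
member = threading.Thread(
    target=barrier.member_run, args=(lambda: events.append("flushed"),))
member.start()
barrier.coordinator_run()
member.join()
events.append("completed")
```

With one member, as in this single-regionserver minicluster, the coordinator's latch counts down to zero after the first acquire, matching the `Count = 0` seen in the log.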
2023-06-07 22:59:00,228 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:00,228 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-07 22:59:00,228 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-06-07 22:59:00,228 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase4.apache.org,41153,1686178729121' in zk 2023-06-07 22:59:00,229 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:00,229 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-06-07 22:59:00,229 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:00,229 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-07 22:59:00,229 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-07 22:59:00,229 DEBUG [member: 
'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-06-07 22:59:00,230 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed.
2023-06-07 22:59:00,230 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-07 22:59:00,230 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-07 22:59:00,231 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace
2023-06-07 22:59:00,231 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:00,231 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-07 22:59:00,231 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace
2023-06-07 22:59:00,232 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:00,232 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase4.apache.org,41153,1686178729121':
2023-06-07 22:59:00,232 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed
2023-06-07 22:59:00,232 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase.
2023-06-07 22:59:00,232 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,41153,1686178729121' released barrier for procedure 'hbase:namespace', counting down latch.
Waiting for 0 more
2023-06-07 22:59:00,232 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures
2023-06-07 22:59:00,232 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace
2023-06-07 22:59:00,232 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespace including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort
2023-06-07 22:59:00,235 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace
2023-06-07 22:59:00,235 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace
2023-06-07 22:59:00,235 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace
2023-06-07 22:59:00,235 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-07 22:59:00,235 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-07 22:59:00,235 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace
2023-06-07 22:59:00,235 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001,
quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-07 22:59:00,235 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-06-07 22:59:00,236 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-07 22:59:00,236 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-07 22:59:00,236 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-07 22:59:00,236 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:00,236 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-06-07 22:59:00,236 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-07 22:59:00,237 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-07 22:59:00,237 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-07 22:59:00,237 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:00,237 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:00,237 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-07 22:59:00,238 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-07 22:59:00,238 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:00,243 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:00,243 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-07 22:59:00,243 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-07 22:59:00,243 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-07 22:59:00,243 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-07 22:59:00,243 INFO [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-07 22:59:00,243 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-07 22:59:00,243 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-07 22:59:00,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-06-07 22:59:00,243 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:00,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-06-07 22:59:00,244 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-07 22:59:00,244 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-07 22:59:00,244 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-07 22:59:00,244 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-07 22:59:00,245 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-07 22:59:00,246 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry) 2023-06-07 22:59:00,246 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-07 22:59:10,246 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2704): Getting current status of procedure from master... 
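The `HBaseAdmin(2690)`/`HBaseAdmin(2698)` entries show the client side: wait up to 300000 ms for the `flush-table-proc` procedure, sleeping 10000 ms between status checks against the master. A minimal sketch of that bounded polling loop, with a stub callable standing in for the "is procedure done" RPC (names and structure here are illustrative, not the HBase client API):

```python
# Hedged sketch: bounded poll-and-sleep loop implied by the HBaseAdmin log
# lines above (300000 ms budget, 10000 ms between retries).
def wait_for_procedure(is_done, max_wait_ms=300_000, retry_sleep_ms=10_000,
                       sleep=lambda ms: None):
    """Poll is_done() until it reports completion or the budget runs out."""
    waited = 0
    while not is_done():
        if waited >= max_wait_ms:
            return False           # gave up, mirroring the admin timeout
        sleep(retry_sleep_ms)      # "(#N) Sleeping: 10000ms while waiting..."
        waited += retry_sleep_ms
    return True

# A fake master that reports the procedure done on the second status check,
# matching the single sleep-then-success seen in this log.
checks = {"n": 0}
def fake_is_done():
    checks["n"] += 1
    return checks["n"] >= 2

finished = wait_for_procedure(fake_is_done)
```

Injecting `sleep` as a no-op keeps the sketch testable; a real client would pass `time.sleep`-style blocking there.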
2023-06-07 22:59:10,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-07 22:59:10,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-06-07 22:59:10,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:10,263 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-07 22:59:10,263 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-07 22:59:10,264 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-07 22:59:10,264 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-06-07 22:59:10,264 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:10,264 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:10,265 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-07 22:59:10,265 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:10,265 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-07 22:59:10,265 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-07 22:59:10,266 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:10,266 DEBUG 
[(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-07 22:59:10,266 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:10,266 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:10,266 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-07 22:59:10,266 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:10,267 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:10,267 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:10,267 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-07 22:59:10,267 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-07 22:59:10,267 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 
stage 2023-06-07 22:59:10,267 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-07 22:59:10,268 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-07 22:59:10,268 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. 2023-06-07 22:59:10,268 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. started... 2023-06-07 22:59:10,268 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing eb0570753fefb676a232a1a0e16d349a 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-07 22:59:10,281 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/.tmp/info/12badbcdf06c472bacfac5afab9d8d2e 2023-06-07 22:59:10,290 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/.tmp/info/12badbcdf06c472bacfac5afab9d8d2e as hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/12badbcdf06c472bacfac5afab9d8d2e 2023-06-07 22:59:10,295 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/12badbcdf06c472bacfac5afab9d8d2e, entries=1, sequenceid=5, filesize=5.8 K 2023-06-07 22:59:10,296 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for eb0570753fefb676a232a1a0e16d349a in 28ms, sequenceid=5, compaction requested=false 2023-06-07 22:59:10,297 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for eb0570753fefb676a232a1a0e16d349a: 2023-06-07 22:59:10,297 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. 2023-06-07 22:59:10,297 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 
2023-06-07 22:59:10,297 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks.
2023-06-07 22:59:10,297 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,297 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired
2023-06-07 22:59:10,297 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,41153,1686178729121' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk
2023-06-07 22:59:10,299 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,299 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,299 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,299 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-07 22:59:10,299 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-07 22:59:10,299 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,299 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2023-06-07 22:59:10,299 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-07 22:59:10,300 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-07 22:59:10,300 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,300 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,300 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-07 22:59:10,301 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,41153,1686178729121' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator
2023-06-07 22:59:10,301 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@57d68ee5[Count = 0] remaining members to acquire global barrier
2023-06-07 22:59:10,301 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution.
2023-06-07 22:59:10,301 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,303 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,303 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,303 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,304 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator.
2023-06-07 22:59:10,304 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,304 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release'
2023-06-07 22:59:10,304 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed
2023-06-07 22:59:10,304 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,41153,1686178729121' in zk
2023-06-07 22:59:10,306 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,306 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion
2023-06-07 22:59:10,306 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,306 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-07 22:59:10,306 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-07 22:59:10,306 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-06-07 22:59:10,306 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed.
2023-06-07 22:59:10,307 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-07 22:59:10,307 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-07 22:59:10,307 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,307 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,308 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-07 22:59:10,308 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,308 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,308 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,41153,1686178729121':
2023-06-07 22:59:10,309 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,41153,1686178729121' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more
2023-06-07 22:59:10,309 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed
2023-06-07 22:59:10,309 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase.
2023-06-07 22:59:10,309 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures
2023-06-07 22:59:10,309 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,309 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort
2023-06-07 22:59:10,312 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,312 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,312 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,312 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-07 22:59:10,312 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-07 22:59:10,312 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,313 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,312 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-07 22:59:10,313 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-07 22:59:10,313 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-07 22:59:10,313 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-07 22:59:10,313 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,313 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,313 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,314 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-07 22:59:10,314 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,314 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,314 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,315 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-07 22:59:10,315 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,315 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,319 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,319 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-06-07 22:59:10,319 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,319 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-06-07 22:59:10,319 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,319 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-06-07 22:59:10,319 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:10,319 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-07 22:59:10,319 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,319 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-07 22:59:10,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-06-07 22:59:10,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful!
2023-06-07 22:59:10,319 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,319 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:10,320 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-07 22:59:10,320 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-07 22:59:10,320 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry)
2023-06-07 22:59:10,320 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion.
2023-06-07 22:59:20,320 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2704): Getting current status of procedure from master...
2023-06-07 22:59:20,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done
2023-06-07 22:59:20,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc
2023-06-07 22:59:20,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt.
2023-06-07 22:59:20,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:20,330 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-06-07 22:59:20,330 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-06-07 22:59:20,331 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire'
2023-06-07 22:59:20,331 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members.
2023-06-07 22:59:20,331 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:20,331 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:20,332 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-06-07 22:59:20,332 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:20,332 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-06-07 22:59:20,333 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-07 22:59:20,333 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:20,333 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire'
2023-06-07 22:59:20,333 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:20,333 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:20,334 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4
2023-06-07 22:59:20,334 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:20,334 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:20,334 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing
2023-06-07 22:59:20,334 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:20,334 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms
2023-06-07 22:59:20,334 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-06-07 22:59:20,334 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage
2023-06-07 22:59:20,335 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions
2023-06-07 22:59:20,335 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish.
2023-06-07 22:59:20,335 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.
2023-06-07 22:59:20,335 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. started...
2023-06-07 22:59:20,335 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing eb0570753fefb676a232a1a0e16d349a 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB
2023-06-07 22:59:20,345 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/.tmp/info/a519669d72174da1bd4c0f7f765b8298
2023-06-07 22:59:20,352 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/.tmp/info/a519669d72174da1bd4c0f7f765b8298 as hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/a519669d72174da1bd4c0f7f765b8298
2023-06-07 22:59:20,359 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/a519669d72174da1bd4c0f7f765b8298, entries=1, sequenceid=9, filesize=5.8 K
2023-06-07 22:59:20,360 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for eb0570753fefb676a232a1a0e16d349a in 25ms, sequenceid=9, compaction requested=false
2023-06-07 22:59:20,360 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for eb0570753fefb676a232a1a0e16d349a:
2023-06-07 22:59:20,360 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.
2023-06-07 22:59:20,360 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks.
2023-06-07 22:59:20,360 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks.
2023-06-07 22:59:20,360 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:20,360 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired
2023-06-07 22:59:20,360 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,41153,1686178729121' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk
2023-06-07 22:59:20,362 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:20,362 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:20,362 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:20,362 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-07 22:59:20,362 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-07 22:59:20,362 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:20,362 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-07 22:59:20,362 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2023-06-07 22:59:20,363 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-07 22:59:20,363 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-07 22:59:20,363 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:20,364 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-07 22:59:20,364 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,41153,1686178729121' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-07 22:59:20,364 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@12d40a7b[Count = 0] remaining members to acquire global barrier 2023-06-07 22:59:20,364 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-07 22:59:20,364 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,365 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,365 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,365 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,365 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 
received 'reached' from coordinator. 2023-06-07 22:59:20,365 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-07 22:59:20,365 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:20,365 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-07 22:59:20,365 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,41153,1686178729121' in zk 2023-06-07 22:59:20,368 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:20,368 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-07 22:59:20,368 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:20,368 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-07 22:59:20,368 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-07 22:59:20,368 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-07 22:59:20,368 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-07 22:59:20,369 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-07 22:59:20,369 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-07 22:59:20,369 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,369 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:20,370 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-07 22:59:20,370 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,370 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:20,371 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,41153,1686178729121': 2023-06-07 22:59:20,371 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,41153,1686178729121' released barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', 
counting down latch. Waiting for 0 more 2023-06-07 22:59:20,371 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-07 22:59:20,371 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-07 22:59:20,371 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-07 22:59:20,371 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,371 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-07 22:59:20,372 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,372 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,372 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created 
event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,372 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,372 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-07 22:59:20,372 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-07 22:59:20,372 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,372 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-07 22:59:20,373 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-07 22:59:20,373 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:20,373 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-07 22:59:20,373 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-07 22:59:20,373 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,373 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,374 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-07 22:59:20,374 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,374 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:20,374 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:20,375 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-07 22:59:20,375 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,376 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:20,377 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:20,378 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,378 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-07 22:59:20,378 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,378 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-07 22:59:20,378 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-07 22:59:20,378 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-07 22:59:20,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-07 22:59:20,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-06-07 22:59:20,378 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-07 22:59:20,378 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:20,378 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-06-07 22:59:20,378 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,379 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling' to complete. 
(max 20000 ms per retry) 2023-06-07 22:59:20,379 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-07 22:59:20,379 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-07 22:59:20,379 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:20,379 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-07 22:59:20,379 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,379 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-07 22:59:30,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-07 22:59:30,393 INFO [Listener at localhost/46415] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178729503 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178770382 2023-06-07 22:59:30,393 DEBUG [Listener at localhost/46415] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46781,DS-b554bb39-25a8-436c-9176-ad3897a8db1c,DISK], DatanodeInfoWithStorage[127.0.0.1:46301,DS-3c82fe63-e389-4ac7-93c7-52ad8d1ffb3f,DISK]] 2023-06-07 22:59:30,393 DEBUG [Listener at localhost/46415] wal.AbstractFSWAL(716): hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178729503 is not closed yet, will try archiving it next time 2023-06-07 22:59:30,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-06-07 22:59:30,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
2023-06-07 22:59:30,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,400 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-07 22:59:30,400 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-07 22:59:30,400 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-07 22:59:30,400 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-06-07 22:59:30,401 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,401 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,402 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,402 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-07 22:59:30,402 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-07 22:59:30,402 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-07 22:59:30,402 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,402 DEBUG 
[(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-07 22:59:30,403 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,403 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,403 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-07 22:59:30,403 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,403 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,403 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-07 22:59:30,403 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,403 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-07 22:59:30,403 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-07 22:59:30,404 DEBUG 
[member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-07 22:59:30,404 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-07 22:59:30,404 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-07 22:59:30,404 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. 2023-06-07 22:59:30,404 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. started... 
2023-06-07 22:59:30,404 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing eb0570753fefb676a232a1a0e16d349a 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-07 22:59:30,417 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/.tmp/info/b064ba97cf1e498390c3c31b04c9337e 2023-06-07 22:59:30,423 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/.tmp/info/b064ba97cf1e498390c3c31b04c9337e as hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/b064ba97cf1e498390c3c31b04c9337e 2023-06-07 22:59:30,429 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/b064ba97cf1e498390c3c31b04c9337e, entries=1, sequenceid=13, filesize=5.8 K 2023-06-07 22:59:30,430 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for eb0570753fefb676a232a1a0e16d349a in 26ms, sequenceid=13, compaction requested=true 2023-06-07 22:59:30,430 DEBUG 
[rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for eb0570753fefb676a232a1a0e16d349a: 2023-06-07 22:59:30,430 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. 2023-06-07 22:59:30,430 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-07 22:59:30,430 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-07 22:59:30,430 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,430 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-07 22:59:30,430 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,41153,1686178729121' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-07 22:59:30,433 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,433 DEBUG [Listener at 
localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,433 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,433 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-07 22:59:30,433 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-07 22:59:30,433 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,433 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-07 22:59:30,433 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-07 22:59:30,434 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-07 22:59:30,434 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,434 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,434 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(244): |-reached 2023-06-07 22:59:30,435 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,41153,1686178729121' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-07 22:59:30,435 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@387c47f9[Count = 0] remaining members to acquire global barrier 2023-06-07 22:59:30,435 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-07 22:59:30,435 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,436 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,436 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,436 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,436 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-06-07 22:59:30,436 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-07 22:59:30,436 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,41153,1686178729121' in zk 2023-06-07 22:59:30,436 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,436 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-07 22:59:30,438 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-07 22:59:30,438 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,438 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-07 22:59:30,438 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,438 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-07 22:59:30,438 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-07 22:59:30,438 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-07 22:59:30,439 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-07 22:59:30,439 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-07 22:59:30,439 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,440 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,440 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-07 22:59:30,440 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,441 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,441 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,41153,1686178729121': 2023-06-07 22:59:30,441 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,41153,1686178729121' released barrier for 
procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-07 22:59:30,441 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-07 22:59:30,441 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-07 22:59:30,441 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-07 22:59:30,441 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,441 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-07 22:59:30,443 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,443 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,443 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,443 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,443 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-07 22:59:30,443 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-07 22:59:30,443 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-07 22:59:30,443 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,443 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-07 22:59:30,443 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,443 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-07 22:59:30,444 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-07 22:59:30,444 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,444 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,451 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-07 22:59:30,451 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,451 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,451 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,452 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-07 22:59:30,452 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,452 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,460 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-07 22:59:30,460 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,460 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-07 22:59:30,460 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-07 22:59:30,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-06-07 22:59:30,460 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-07 22:59:30,460 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-07 22:59:30,460 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,460 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-07 22:59:30,460 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:30,460 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. 
(max 20000 ms per retry) 2023-06-07 22:59:30,461 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,461 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-07 22:59:30,461 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-07 22:59:30,461 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:30,461 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-07 22:59:30,461 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,461 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-07 22:59:40,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-07 22:59:40,462 DEBUG [Listener at localhost/46415] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-07 22:59:40,467 DEBUG [Listener at localhost/46415] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-07 22:59:40,467 DEBUG [Listener at localhost/46415] regionserver.HStore(1912): eb0570753fefb676a232a1a0e16d349a/info is initiating minor compaction (all files) 2023-06-07 22:59:40,468 INFO [Listener at localhost/46415] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-07 22:59:40,468 INFO [Listener at localhost/46415] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:40,468 INFO [Listener at localhost/46415] regionserver.HRegion(2259): Starting compaction of eb0570753fefb676a232a1a0e16d349a/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. 
2023-06-07 22:59:40,468 INFO [Listener at localhost/46415] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/12badbcdf06c472bacfac5afab9d8d2e, hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/a519669d72174da1bd4c0f7f765b8298, hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/b064ba97cf1e498390c3c31b04c9337e] into tmpdir=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/.tmp, totalSize=17.4 K 2023-06-07 22:59:40,469 DEBUG [Listener at localhost/46415] compactions.Compactor(207): Compacting 12badbcdf06c472bacfac5afab9d8d2e, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1686178750256 2023-06-07 22:59:40,469 DEBUG [Listener at localhost/46415] compactions.Compactor(207): Compacting a519669d72174da1bd4c0f7f765b8298, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1686178760322 2023-06-07 22:59:40,470 DEBUG [Listener at localhost/46415] compactions.Compactor(207): Compacting b064ba97cf1e498390c3c31b04c9337e, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1686178770381 2023-06-07 22:59:40,483 INFO [Listener at localhost/46415] throttle.PressureAwareThroughputController(145): eb0570753fefb676a232a1a0e16d349a#info#compaction#19 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-07 22:59:40,496 DEBUG [Listener at localhost/46415] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/.tmp/info/ce569d40779540f597c53b86923e3b7b as hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/ce569d40779540f597c53b86923e3b7b 2023-06-07 22:59:40,502 INFO [Listener at localhost/46415] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in eb0570753fefb676a232a1a0e16d349a/info of eb0570753fefb676a232a1a0e16d349a into ce569d40779540f597c53b86923e3b7b(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-07 22:59:40,502 DEBUG [Listener at localhost/46415] regionserver.HRegion(2289): Compaction status journal for eb0570753fefb676a232a1a0e16d349a: 2023-06-07 22:59:40,512 INFO [Listener at localhost/46415] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178770382 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178780503 2023-06-07 22:59:40,512 DEBUG [Listener at localhost/46415] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46301,DS-3c82fe63-e389-4ac7-93c7-52ad8d1ffb3f,DISK], DatanodeInfoWithStorage[127.0.0.1:46781,DS-b554bb39-25a8-436c-9176-ad3897a8db1c,DISK]] 2023-06-07 22:59:40,512 DEBUG [Listener at localhost/46415] wal.AbstractFSWAL(716): 
hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178770382 is not closed yet, will try archiving it next time 2023-06-07 22:59:40,512 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178729503 to hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/oldWALs/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178729503 2023-06-07 22:59:40,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-06-07 22:59:40,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
2023-06-07 22:59:40,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,521 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-07 22:59:40,521 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-07 22:59:40,521 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-07 22:59:40,521 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-06-07 22:59:40,522 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,522 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,526 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,527 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-07 22:59:40,527 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-07 22:59:40,527 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-07 22:59:40,527 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,527 DEBUG 
[(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-07 22:59:40,527 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,527 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,528 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-07 22:59:40,528 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,528 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,528 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-07 22:59:40,528 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,528 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-07 22:59:40,528 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-07 22:59:40,529 DEBUG 
[member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-07 22:59:40,529 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-07 22:59:40,529 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-07 22:59:40,529 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. 2023-06-07 22:59:40,529 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. started... 
2023-06-07 22:59:40,529 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing eb0570753fefb676a232a1a0e16d349a 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-07 22:59:40,538 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/.tmp/info/5233105166584214827ace0ac90babc0 2023-06-07 22:59:40,543 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/.tmp/info/5233105166584214827ace0ac90babc0 as hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/5233105166584214827ace0ac90babc0 2023-06-07 22:59:40,549 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/5233105166584214827ace0ac90babc0, entries=1, sequenceid=18, filesize=5.8 K 2023-06-07 22:59:40,550 INFO [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for eb0570753fefb676a232a1a0e16d349a in 21ms, sequenceid=18, compaction requested=false 2023-06-07 22:59:40,550 DEBUG 
[rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for eb0570753fefb676a232a1a0e16d349a: 2023-06-07 22:59:40,550 DEBUG [rs(jenkins-hbase4.apache.org,41153,1686178729121)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. 2023-06-07 22:59:40,550 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-07 22:59:40,550 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-07 22:59:40,550 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,550 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-07 22:59:40,550 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,41153,1686178729121' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-07 22:59:40,552 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,552 DEBUG [Listener at 
localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,553 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,553 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-07 22:59:40,553 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,553 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-07 22:59:40,553 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-07 22:59:40,553 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-07 22:59:40,553 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-07 22:59:40,554 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,554 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,554 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(244): |-reached 2023-06-07 22:59:40,555 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,41153,1686178729121' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-07 22:59:40,555 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@238cbbb6[Count = 0] remaining members to acquire global barrier 2023-06-07 22:59:40,555 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-07 22:59:40,555 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,556 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,556 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,556 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,556 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-06-07 22:59:40,556 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-07 22:59:40,556 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,556 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-07 22:59:40,556 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,41153,1686178729121' in zk 2023-06-07 22:59:40,559 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-07 22:59:40,559 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,559 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-07 22:59:40,559 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,560 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-07 22:59:40,560 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-07 22:59:40,559 DEBUG [member: 'jenkins-hbase4.apache.org,41153,1686178729121' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-07 22:59:40,560 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-07 22:59:40,561 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-07 22:59:40,561 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,561 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,561 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-07 22:59:40,562 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,562 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,562 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,41153,1686178729121': 2023-06-07 22:59:40,562 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,41153,1686178729121' released barrier for 
procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-07 22:59:40,562 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-07 22:59:40,563 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-07 22:59:40,563 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-07 22:59:40,563 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,563 INFO [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-07 22:59:40,564 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,564 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,564 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,564 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-07 22:59:40,564 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-07 22:59:40,564 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,564 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,564 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-07 22:59:40,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-07 22:59:40,565 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-07 22:59:40,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-07 22:59:40,565 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-07 22:59:40,566 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,566 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,566 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,567 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-07 22:59:40,567 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,567 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,570 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,570 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-07 22:59:40,570 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,570 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-07 22:59:40,570 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-07 22:59:40,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-06-07 22:59:40,570 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-07 22:59:40,570 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-06-07 22:59:40,570 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:40,570 DEBUG [(jenkins-hbase4.apache.org,41055,1686178729081)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-07 22:59:40,570 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-07 22:59:40,570 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,571 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. 
(max 20000 ms per retry) 2023-06-07 22:59:40,571 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:40,571 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-07 22:59:40,571 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-07 22:59:40,571 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-07 22:59:40,571 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-07 22:59:50,571 DEBUG [Listener at localhost/46415] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-07 22:59:50,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41055] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-07 22:59:50,651 INFO [Listener at localhost/46415] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178780503 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178790639 2023-06-07 22:59:50,651 DEBUG [Listener at localhost/46415] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46781,DS-b554bb39-25a8-436c-9176-ad3897a8db1c,DISK], DatanodeInfoWithStorage[127.0.0.1:46301,DS-3c82fe63-e389-4ac7-93c7-52ad8d1ffb3f,DISK]] 2023-06-07 22:59:50,651 DEBUG [Listener at localhost/46415] wal.AbstractFSWAL(716): hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178780503 is not closed yet, will try archiving it next time 2023-06-07 22:59:50,651 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-07 22:59:50,651 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178770382 to hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/oldWALs/jenkins-hbase4.apache.org%2C41153%2C1686178729121.1686178770382 2023-06-07 22:59:50,651 INFO [Listener at localhost/46415] client.ConnectionImplementation(1980): Closing master protocol: MasterService 
2023-06-07 22:59:50,651 DEBUG [Listener at localhost/46415] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4326b091 to 127.0.0.1:63966 2023-06-07 22:59:50,653 DEBUG [Listener at localhost/46415] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:59:50,653 DEBUG [Listener at localhost/46415] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-07 22:59:50,653 DEBUG [Listener at localhost/46415] util.JVMClusterUtil(257): Found active master hash=591882234, stopped=false 2023-06-07 22:59:50,653 INFO [Listener at localhost/46415] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41055,1686178729081 2023-06-07 22:59:50,655 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-07 22:59:50,655 INFO [Listener at localhost/46415] procedure2.ProcedureExecutor(629): Stopping 2023-06-07 22:59:50,655 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:59:50,656 DEBUG [Listener at localhost/46415] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3b011774 to 127.0.0.1:63966 2023-06-07 22:59:50,656 DEBUG [Listener at localhost/46415] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:59:50,655 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-07 22:59:50,656 INFO [Listener at localhost/46415] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,41153,1686178729121' ***** 2023-06-07 
22:59:50,656 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 22:59:50,656 INFO [Listener at localhost/46415] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-07 22:59:50,657 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 22:59:50,657 INFO [RS:0;jenkins-hbase4:41153] regionserver.HeapMemoryManager(220): Stopping 2023-06-07 22:59:50,657 INFO [RS:0;jenkins-hbase4:41153] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-07 22:59:50,657 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-07 22:59:50,657 INFO [RS:0;jenkins-hbase4:41153] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-06-07 22:59:50,657 INFO [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(3303): Received CLOSE for 4e68d662b1c7282eaff433c7362d7d3d 2023-06-07 22:59:50,658 INFO [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(3303): Received CLOSE for eb0570753fefb676a232a1a0e16d349a 2023-06-07 22:59:50,658 INFO [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41153,1686178729121 2023-06-07 22:59:50,658 DEBUG [RS:0;jenkins-hbase4:41153] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x26f3714c to 127.0.0.1:63966 2023-06-07 22:59:50,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4e68d662b1c7282eaff433c7362d7d3d, disabling compactions & flushes 2023-06-07 22:59:50,658 DEBUG [RS:0;jenkins-hbase4:41153] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 22:59:50,658 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. 2023-06-07 22:59:50,658 INFO [RS:0;jenkins-hbase4:41153] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-07 22:59:50,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. 2023-06-07 22:59:50,658 INFO [RS:0;jenkins-hbase4:41153] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-07 22:59:50,658 INFO [RS:0;jenkins-hbase4:41153] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-07 22:59:50,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. 
after waiting 0 ms 2023-06-07 22:59:50,658 INFO [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-07 22:59:50,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. 2023-06-07 22:59:50,659 INFO [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-07 22:59:50,659 DEBUG [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(1478): Online Regions={4e68d662b1c7282eaff433c7362d7d3d=hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d., eb0570753fefb676a232a1a0e16d349a=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a., 1588230740=hbase:meta,,1.1588230740} 2023-06-07 22:59:50,659 DEBUG [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(1504): Waiting on 1588230740, 4e68d662b1c7282eaff433c7362d7d3d, eb0570753fefb676a232a1a0e16d349a 2023-06-07 22:59:50,660 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-07 22:59:50,660 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-07 22:59:50,660 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-07 22:59:50,660 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-07 22:59:50,660 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-07 22:59:50,661 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB 
heapSize=5.61 KB 2023-06-07 22:59:50,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/namespace/4e68d662b1c7282eaff433c7362d7d3d/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-07 22:59:50,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. 2023-06-07 22:59:50,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4e68d662b1c7282eaff433c7362d7d3d: 2023-06-07 22:59:50,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686178729672.4e68d662b1c7282eaff433c7362d7d3d. 2023-06-07 22:59:50,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eb0570753fefb676a232a1a0e16d349a, disabling compactions & flushes 2023-06-07 22:59:50,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. 2023-06-07 22:59:50,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. 2023-06-07 22:59:50,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. 
after waiting 0 ms 2023-06-07 22:59:50,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a. 2023-06-07 22:59:50,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing eb0570753fefb676a232a1a0e16d349a 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-07 22:59:50,672 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.84 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/.tmp/info/2862e5a411c141d0b2e6fac9f9ffb5bb 2023-06-07 22:59:50,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/.tmp/info/f8fbdfa2e95046598dbba1498e772c50 2023-06-07 22:59:50,682 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/.tmp/info/f8fbdfa2e95046598dbba1498e772c50 as hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/f8fbdfa2e95046598dbba1498e772c50 2023-06-07 22:59:50,689 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), 
to=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/.tmp/table/a88f2ba939ad40f781d652115bf1ac23 2023-06-07 22:59:50,689 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/f8fbdfa2e95046598dbba1498e772c50, entries=1, sequenceid=22, filesize=5.8 K 2023-06-07 22:59:50,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for eb0570753fefb676a232a1a0e16d349a in 23ms, sequenceid=22, compaction requested=true 2023-06-07 22:59:50,693 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/12badbcdf06c472bacfac5afab9d8d2e, hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/a519669d72174da1bd4c0f7f765b8298, hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/b064ba97cf1e498390c3c31b04c9337e] to archive 2023-06-07 22:59:50,694 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-07 22:59:50,696 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/12badbcdf06c472bacfac5afab9d8d2e to hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/12badbcdf06c472bacfac5afab9d8d2e
2023-06-07 22:59:50,697 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/a519669d72174da1bd4c0f7f765b8298 to hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/a519669d72174da1bd4c0f7f765b8298
2023-06-07 22:59:50,698 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/.tmp/info/2862e5a411c141d0b2e6fac9f9ffb5bb as hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/info/2862e5a411c141d0b2e6fac9f9ffb5bb
2023-06-07 22:59:50,699 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/b064ba97cf1e498390c3c31b04c9337e to hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/info/b064ba97cf1e498390c3c31b04c9337e
2023-06-07 22:59:50,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/eb0570753fefb676a232a1a0e16d349a/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1
2023-06-07 22:59:50,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.
2023-06-07 22:59:50,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eb0570753fefb676a232a1a0e16d349a:
2023-06-07 22:59:50,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686178730156.eb0570753fefb676a232a1a0e16d349a.
2023-06-07 22:59:50,707 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/info/2862e5a411c141d0b2e6fac9f9ffb5bb, entries=20, sequenceid=14, filesize=7.6 K
2023-06-07 22:59:50,708 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/.tmp/table/a88f2ba939ad40f781d652115bf1ac23 as hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/table/a88f2ba939ad40f781d652115bf1ac23
2023-06-07 22:59:50,713 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/table/a88f2ba939ad40f781d652115bf1ac23, entries=4, sequenceid=14, filesize=4.9 K
2023-06-07 22:59:50,714 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3174, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 54ms, sequenceid=14, compaction requested=false
2023-06-07 22:59:50,720 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1
2023-06-07 22:59:50,721 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-06-07 22:59:50,721 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-06-07 22:59:50,721 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-07 22:59:50,721 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-06-07 22:59:50,860 INFO [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41153,1686178729121; all regions closed.
2023-06-07 22:59:50,860 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:50,866 DEBUG [RS:0;jenkins-hbase4:41153] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/oldWALs
2023-06-07 22:59:50,866 INFO [RS:0;jenkins-hbase4:41153] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C41153%2C1686178729121.meta:.meta(num 1686178729617)
2023-06-07 22:59:50,867 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/WALs/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:50,872 DEBUG [RS:0;jenkins-hbase4:41153] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/oldWALs
2023-06-07 22:59:50,872 INFO [RS:0;jenkins-hbase4:41153] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C41153%2C1686178729121:(num 1686178790639)
2023-06-07 22:59:50,872 DEBUG [RS:0;jenkins-hbase4:41153] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-07 22:59:50,872 INFO [RS:0;jenkins-hbase4:41153] regionserver.LeaseManager(133): Closed leases
2023-06-07 22:59:50,872 INFO [RS:0;jenkins-hbase4:41153] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-06-07 22:59:50,872 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-07 22:59:50,872 INFO [RS:0;jenkins-hbase4:41153] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41153
2023-06-07 22:59:50,876 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41153,1686178729121
2023-06-07 22:59:50,876 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-07 22:59:50,876 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-07 22:59:50,879 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41153,1686178729121]
2023-06-07 22:59:50,879 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41153,1686178729121; numProcessing=1
2023-06-07 22:59:50,880 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41153,1686178729121 already deleted, retry=false
2023-06-07 22:59:50,880 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41153,1686178729121 expired; onlineServers=0
2023-06-07 22:59:50,880 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,41055,1686178729081' *****
2023-06-07 22:59:50,880 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-06-07 22:59:50,880 DEBUG [M:0;jenkins-hbase4:41055] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2778e74a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-06-07 22:59:50,880 INFO [M:0;jenkins-hbase4:41055] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41055,1686178729081
2023-06-07 22:59:50,880 INFO [M:0;jenkins-hbase4:41055] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41055,1686178729081; all regions closed.
2023-06-07 22:59:50,880 DEBUG [M:0;jenkins-hbase4:41055] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-07 22:59:50,881 DEBUG [M:0;jenkins-hbase4:41055] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-06-07 22:59:50,881 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-06-07 22:59:50,881 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178729264] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178729264,5,FailOnTimeoutGroup]
2023-06-07 22:59:50,881 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178729263] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178729263,5,FailOnTimeoutGroup]
2023-06-07 22:59:50,881 DEBUG [M:0;jenkins-hbase4:41055] cleaner.HFileCleaner(317): Stopping file delete threads
2023-06-07 22:59:50,882 INFO [M:0;jenkins-hbase4:41055] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-06-07 22:59:50,882 INFO [M:0;jenkins-hbase4:41055] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-06-07 22:59:50,882 INFO [M:0;jenkins-hbase4:41055] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-06-07 22:59:50,882 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-06-07 22:59:50,883 DEBUG [M:0;jenkins-hbase4:41055] master.HMaster(1512): Stopping service threads
2023-06-07 22:59:50,883 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 22:59:50,883 INFO [M:0;jenkins-hbase4:41055] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-06-07 22:59:50,883 ERROR [M:0;jenkins-hbase4:41055] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-06-07 22:59:50,883 INFO [M:0;jenkins-hbase4:41055] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-06-07 22:59:50,883 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-06-07 22:59:50,883 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-07 22:59:50,883 DEBUG [M:0;jenkins-hbase4:41055] zookeeper.ZKUtil(398): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-06-07 22:59:50,883 WARN [M:0;jenkins-hbase4:41055] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-06-07 22:59:50,883 INFO [M:0;jenkins-hbase4:41055] assignment.AssignmentManager(315): Stopping assignment manager
2023-06-07 22:59:50,884 INFO [M:0;jenkins-hbase4:41055] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-06-07 22:59:50,884 DEBUG [M:0;jenkins-hbase4:41055] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-07 22:59:50,884 INFO [M:0;jenkins-hbase4:41055] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:59:50,884 DEBUG [M:0;jenkins-hbase4:41055] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:59:50,884 DEBUG [M:0;jenkins-hbase4:41055] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-07 22:59:50,884 DEBUG [M:0;jenkins-hbase4:41055] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:59:50,884 INFO [M:0;jenkins-hbase4:41055] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.89 KB heapSize=47.33 KB
2023-06-07 22:59:50,895 INFO [M:0;jenkins-hbase4:41055] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.89 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/98f2d5f29a3846138c15b5d05db0fc45
2023-06-07 22:59:50,900 INFO [M:0;jenkins-hbase4:41055] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 98f2d5f29a3846138c15b5d05db0fc45
2023-06-07 22:59:50,901 DEBUG [M:0;jenkins-hbase4:41055] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/98f2d5f29a3846138c15b5d05db0fc45 as hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/98f2d5f29a3846138c15b5d05db0fc45
2023-06-07 22:59:50,907 INFO [M:0;jenkins-hbase4:41055] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 98f2d5f29a3846138c15b5d05db0fc45
2023-06-07 22:59:50,907 INFO [M:0;jenkins-hbase4:41055] regionserver.HStore(1080): Added hdfs://localhost:39075/user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/98f2d5f29a3846138c15b5d05db0fc45, entries=11, sequenceid=100, filesize=6.1 K
2023-06-07 22:59:50,908 INFO [M:0;jenkins-hbase4:41055] regionserver.HRegion(2948): Finished flush of dataSize ~38.89 KB/39824, heapSize ~47.31 KB/48448, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=100, compaction requested=false
2023-06-07 22:59:50,909 INFO [M:0;jenkins-hbase4:41055] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 22:59:50,909 DEBUG [M:0;jenkins-hbase4:41055] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-07 22:59:50,909 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a57bdfdd-f612-72e9-f37a-d2b41eeafd53/MasterData/WALs/jenkins-hbase4.apache.org,41055,1686178729081
2023-06-07 22:59:50,913 INFO [M:0;jenkins-hbase4:41055] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-06-07 22:59:50,913 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-07 22:59:50,914 INFO [M:0;jenkins-hbase4:41055] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41055
2023-06-07 22:59:50,916 DEBUG [M:0;jenkins-hbase4:41055] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41055,1686178729081 already deleted, retry=false
2023-06-07 22:59:50,979 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:59:50,979 INFO [RS:0;jenkins-hbase4:41153] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41153,1686178729121; zookeeper connection closed.
2023-06-07 22:59:50,979 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): regionserver:41153-0x100a7840f6c0001, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:59:50,979 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@42546195] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@42546195
2023-06-07 22:59:50,979 INFO [Listener at localhost/46415] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-06-07 22:59:51,079 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:59:51,079 INFO [M:0;jenkins-hbase4:41055] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41055,1686178729081; zookeeper connection closed.
2023-06-07 22:59:51,079 DEBUG [Listener at localhost/46415-EventThread] zookeeper.ZKWatcher(600): master:41055-0x100a7840f6c0000, quorum=127.0.0.1:63966, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 22:59:51,080 WARN [Listener at localhost/46415] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-07 22:59:51,083 INFO [Listener at localhost/46415] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:59:51,188 WARN [BP-179752761-172.31.14.131-1686178728537 heartbeating to localhost/127.0.0.1:39075] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-07 22:59:51,188 WARN [BP-179752761-172.31.14.131-1686178728537 heartbeating to localhost/127.0.0.1:39075] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-179752761-172.31.14.131-1686178728537 (Datanode Uuid 2f19f854-08dd-4313-8002-35476ef314c9) service to localhost/127.0.0.1:39075
2023-06-07 22:59:51,188 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/cluster_4f30fbaa-ad97-874f-6f71-18207940961f/dfs/data/data3/current/BP-179752761-172.31.14.131-1686178728537] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:59:51,189 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/cluster_4f30fbaa-ad97-874f-6f71-18207940961f/dfs/data/data4/current/BP-179752761-172.31.14.131-1686178728537] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:59:51,190 WARN [Listener at localhost/46415] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-07 22:59:51,194 INFO [Listener at localhost/46415] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:59:51,298 WARN [BP-179752761-172.31.14.131-1686178728537 heartbeating to localhost/127.0.0.1:39075] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-07 22:59:51,298 WARN [BP-179752761-172.31.14.131-1686178728537 heartbeating to localhost/127.0.0.1:39075] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-179752761-172.31.14.131-1686178728537 (Datanode Uuid 2640536c-3547-408d-b0dd-c2ffcab63536) service to localhost/127.0.0.1:39075
2023-06-07 22:59:51,299 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/cluster_4f30fbaa-ad97-874f-6f71-18207940961f/dfs/data/data1/current/BP-179752761-172.31.14.131-1686178728537] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:59:51,299 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/cluster_4f30fbaa-ad97-874f-6f71-18207940961f/dfs/data/data2/current/BP-179752761-172.31.14.131-1686178728537] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 22:59:51,313 INFO [Listener at localhost/46415] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 22:59:51,379 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-06-07 22:59:51,425 INFO [Listener at localhost/46415] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-06-07 22:59:51,442 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-06-07 22:59:51,452 INFO [Listener at localhost/46415] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=94 (was 88) - Thread LEAK? -, OpenFileDescriptor=497 (was 463) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=7 (was 20), ProcessCount=169 (was 170), AvailableMemoryMB=552 (was 644)
2023-06-07 22:59:51,460 INFO [Listener at localhost/46415] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=95, OpenFileDescriptor=497, MaxFileDescriptor=60000, SystemLoadAverage=7, ProcessCount=169, AvailableMemoryMB=552
2023-06-07 22:59:51,460 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-06-07 22:59:51,460 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/hadoop.log.dir so I do NOT create it in target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417
2023-06-07 22:59:51,460 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fcf8f00f-1a27-d550-077a-f38a51bf747d/hadoop.tmp.dir so I do NOT create it in target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417
2023-06-07 22:59:51,460 INFO [Listener at localhost/46415] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/cluster_092f3fa3-ef0b-e86e-1078-e4eb50e3c5dc, deleteOnExit=true
2023-06-07 22:59:51,460 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-06-07 22:59:51,461 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/test.cache.data in system properties and HBase conf
2023-06-07 22:59:51,461 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/hadoop.tmp.dir in system properties and HBase conf
2023-06-07 22:59:51,461 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/hadoop.log.dir in system properties and HBase conf
2023-06-07 22:59:51,461 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/mapreduce.cluster.local.dir in system properties and HBase conf
2023-06-07 22:59:51,461 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-06-07 22:59:51,461 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-06-07 22:59:51,461 DEBUG [Listener at localhost/46415] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-06-07 22:59:51,461 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-06-07 22:59:51,461 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-06-07 22:59:51,462 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-06-07 22:59:51,462 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-07 22:59:51,462 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-06-07 22:59:51,462 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-06-07 22:59:51,462 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-07 22:59:51,462 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/dfs.journalnode.edits.dir in system properties and HBase conf
2023-06-07 22:59:51,462 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-06-07 22:59:51,462 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/nfs.dump.dir in system properties and HBase conf
2023-06-07 22:59:51,462 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/java.io.tmpdir in system properties and HBase conf
2023-06-07 22:59:51,462 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/dfs.journalnode.edits.dir in system properties and HBase conf
2023-06-07 22:59:51,462 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-06-07 22:59:51,463 INFO [Listener at localhost/46415] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-06-07 22:59:51,464 WARN [Listener at localhost/46415] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-07 22:59:51,467 WARN [Listener at localhost/46415] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-07 22:59:51,467 WARN [Listener at localhost/46415] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-07 22:59:51,502 WARN [Listener at localhost/46415] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:59:51,504 INFO [Listener at localhost/46415] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:59:51,508 INFO [Listener at localhost/46415] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/java.io.tmpdir/Jetty_localhost_40371_hdfs____.q7m03p/webapp
2023-06-07 22:59:51,602 INFO [Listener at localhost/46415] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40371
2023-06-07 22:59:51,603 WARN [Listener at localhost/46415] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-07 22:59:51,606 WARN [Listener at localhost/46415] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-07 22:59:51,606 WARN [Listener at localhost/46415] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-07 22:59:51,642 WARN [Listener at localhost/33443] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:59:51,650 WARN [Listener at localhost/33443] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-07 22:59:51,652 WARN [Listener at localhost/33443] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:59:51,653 INFO [Listener at localhost/33443] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:59:51,658 INFO [Listener at localhost/33443] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/java.io.tmpdir/Jetty_localhost_42643_datanode____qtatsu/webapp
2023-06-07 22:59:51,749 INFO [Listener at localhost/33443] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42643
2023-06-07 22:59:51,755 WARN [Listener at localhost/46639] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:59:51,766 WARN [Listener at localhost/46639] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-07 22:59:51,768 WARN [Listener at localhost/46639] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-07 22:59:51,769 INFO [Listener at localhost/46639] log.Slf4jLog(67): jetty-6.1.26
2023-06-07 22:59:51,772 INFO [Listener at localhost/46639] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/java.io.tmpdir/Jetty_localhost_40755_datanode____.kgpabi/webapp
2023-06-07 22:59:51,847 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1e6602fd229cca11: Processing first storage report for DS-f58f2b4e-e295-495a-806b-e7588845dc70 from datanode abc03f61-6d32-40f5-852b-33b275a6bbbb
2023-06-07 22:59:51,847 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1e6602fd229cca11: from storage DS-f58f2b4e-e295-495a-806b-e7588845dc70 node DatanodeRegistration(127.0.0.1:39723, datanodeUuid=abc03f61-6d32-40f5-852b-33b275a6bbbb, infoPort=41333, infoSecurePort=0, ipcPort=46639, storageInfo=lv=-57;cid=testClusterID;nsid=409911293;c=1686178791469), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:59:51,847 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1e6602fd229cca11: Processing first storage report for DS-5e88ce0f-e223-41f0-a697-cdc3ec99907a from datanode abc03f61-6d32-40f5-852b-33b275a6bbbb
2023-06-07 22:59:51,847 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1e6602fd229cca11: from storage DS-5e88ce0f-e223-41f0-a697-cdc3ec99907a node DatanodeRegistration(127.0.0.1:39723, datanodeUuid=abc03f61-6d32-40f5-852b-33b275a6bbbb, infoPort=41333, infoSecurePort=0, ipcPort=46639, storageInfo=lv=-57;cid=testClusterID;nsid=409911293;c=1686178791469), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:59:51,867 INFO [Listener at localhost/46639] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40755
2023-06-07 22:59:51,874 WARN [Listener at localhost/35697] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-07 22:59:51,973 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf3f9f55ae09ab33b: Processing first storage report for DS-1e5b0ccb-8279-41c6-b420-4191875725f3 from datanode 75de98e3-2846-4a37-a740-387a91cd276c
2023-06-07 22:59:51,973 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf3f9f55ae09ab33b: from storage DS-1e5b0ccb-8279-41c6-b420-4191875725f3 node DatanodeRegistration(127.0.0.1:36205, datanodeUuid=75de98e3-2846-4a37-a740-387a91cd276c, infoPort=39221, infoSecurePort=0, ipcPort=35697, storageInfo=lv=-57;cid=testClusterID;nsid=409911293;c=1686178791469), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:59:51,973 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf3f9f55ae09ab33b: Processing first storage report for DS-f9313f3d-0832-45a3-b4b1-c767ec10c736 from datanode 75de98e3-2846-4a37-a740-387a91cd276c
2023-06-07 22:59:51,973 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf3f9f55ae09ab33b: from storage DS-f9313f3d-0832-45a3-b4b1-c767ec10c736 node DatanodeRegistration(127.0.0.1:36205, datanodeUuid=75de98e3-2846-4a37-a740-387a91cd276c, infoPort=39221, infoSecurePort=0, ipcPort=35697, storageInfo=lv=-57;cid=testClusterID;nsid=409911293;c=1686178791469), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-07 22:59:51,983 DEBUG [Listener at localhost/35697] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417
2023-06-07 22:59:51,985 INFO
[Listener at localhost/35697] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/cluster_092f3fa3-ef0b-e86e-1078-e4eb50e3c5dc/zookeeper_0, clientPort=54282, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/cluster_092f3fa3-ef0b-e86e-1078-e4eb50e3c5dc/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/cluster_092f3fa3-ef0b-e86e-1078-e4eb50e3c5dc/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-07 22:59:51,985 INFO [Listener at localhost/35697] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54282 2023-06-07 22:59:51,986 INFO [Listener at localhost/35697] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:59:51,987 INFO [Listener at localhost/35697] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:59:51,999 INFO [Listener at localhost/35697] util.FSUtils(471): Created version file at hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db with version=8 2023-06-07 22:59:51,999 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/hbase-staging 2023-06-07 22:59:52,001 INFO [Listener at localhost/35697] 
client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-07 22:59:52,001 INFO [Listener at localhost/35697] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-07 22:59:52,001 INFO [Listener at localhost/35697] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-07 22:59:52,001 INFO [Listener at localhost/35697] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-07 22:59:52,001 INFO [Listener at localhost/35697] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-07 22:59:52,001 INFO [Listener at localhost/35697] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-07 22:59:52,001 INFO [Listener at localhost/35697] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-07 22:59:52,003 INFO [Listener at localhost/35697] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44709 2023-06-07 22:59:52,003 INFO [Listener at localhost/35697] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:59:52,004 INFO [Listener at localhost/35697] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so 
can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:59:52,005 INFO [Listener at localhost/35697] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44709 connecting to ZooKeeper ensemble=127.0.0.1:54282 2023-06-07 22:59:52,015 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:447090x0, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-07 22:59:52,016 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44709-0x100a78505330000 connected 2023-06-07 22:59:52,030 DEBUG [Listener at localhost/35697] zookeeper.ZKUtil(164): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-07 22:59:52,031 DEBUG [Listener at localhost/35697] zookeeper.ZKUtil(164): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 22:59:52,031 DEBUG [Listener at localhost/35697] zookeeper.ZKUtil(164): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-07 22:59:52,031 DEBUG [Listener at localhost/35697] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44709 2023-06-07 22:59:52,032 DEBUG [Listener at localhost/35697] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44709 2023-06-07 22:59:52,032 DEBUG [Listener at localhost/35697] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44709 2023-06-07 22:59:52,032 DEBUG [Listener at localhost/35697] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44709 2023-06-07 22:59:52,032 DEBUG [Listener 
at localhost/35697] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44709 2023-06-07 22:59:52,032 INFO [Listener at localhost/35697] master.HMaster(444): hbase.rootdir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db, hbase.cluster.distributed=false 2023-06-07 22:59:52,045 INFO [Listener at localhost/35697] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-07 22:59:52,045 INFO [Listener at localhost/35697] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-07 22:59:52,045 INFO [Listener at localhost/35697] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-07 22:59:52,045 INFO [Listener at localhost/35697] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-07 22:59:52,045 INFO [Listener at localhost/35697] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-07 22:59:52,045 INFO [Listener at localhost/35697] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-07 22:59:52,045 INFO [Listener at localhost/35697] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-07 22:59:52,047 INFO [Listener at localhost/35697] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46751 2023-06-07 22:59:52,047 INFO [Listener at localhost/35697] hfile.BlockCacheFactory(142): 
Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-07 22:59:52,048 DEBUG [Listener at localhost/35697] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-07 22:59:52,048 INFO [Listener at localhost/35697] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:59:52,049 INFO [Listener at localhost/35697] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:59:52,050 INFO [Listener at localhost/35697] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46751 connecting to ZooKeeper ensemble=127.0.0.1:54282 2023-06-07 22:59:52,052 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): regionserver:467510x0, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-07 22:59:52,054 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46751-0x100a78505330001 connected 2023-06-07 22:59:52,054 DEBUG [Listener at localhost/35697] zookeeper.ZKUtil(164): regionserver:46751-0x100a78505330001, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-07 22:59:52,054 DEBUG [Listener at localhost/35697] zookeeper.ZKUtil(164): regionserver:46751-0x100a78505330001, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 22:59:52,055 DEBUG [Listener at localhost/35697] zookeeper.ZKUtil(164): regionserver:46751-0x100a78505330001, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-07 22:59:52,058 DEBUG [Listener at localhost/35697] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46751 2023-06-07 22:59:52,059 DEBUG [Listener at localhost/35697] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46751 2023-06-07 22:59:52,059 DEBUG [Listener at localhost/35697] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46751 2023-06-07 22:59:52,059 DEBUG [Listener at localhost/35697] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46751 2023-06-07 22:59:52,059 DEBUG [Listener at localhost/35697] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46751 2023-06-07 22:59:52,060 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44709,1686178792000 2023-06-07 22:59:52,062 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-07 22:59:52,062 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44709,1686178792000 2023-06-07 22:59:52,064 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): regionserver:46751-0x100a78505330001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-07 22:59:52,065 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/master 2023-06-07 22:59:52,065 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:59:52,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-07 22:59:52,066 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-07 22:59:52,066 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44709,1686178792000 from backup master directory 2023-06-07 22:59:52,067 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44709,1686178792000 2023-06-07 22:59:52,067 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-07 22:59:52,067 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-07 22:59:52,067 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44709,1686178792000 2023-06-07 22:59:52,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/hbase.id with ID: 54f19e22-aae0-4de0-99a8-2b5d353b8428 2023-06-07 22:59:52,088 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:59:52,090 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:59:52,097 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x766f7a07 to 127.0.0.1:54282 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-07 22:59:52,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44896324, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-07 22:59:52,101 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-07 22:59:52,101 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-07 22:59:52,101 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-07 22:59:52,103 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/data/master/store-tmp 2023-06-07 22:59:52,111 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:59:52,111 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-07 22:59:52,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 22:59:52,111 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-07 22:59:52,111 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-07 22:59:52,111 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 22:59:52,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 22:59:52,111 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-07 22:59:52,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/WALs/jenkins-hbase4.apache.org,44709,1686178792000 2023-06-07 22:59:52,114 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44709%2C1686178792000, suffix=, logDir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/WALs/jenkins-hbase4.apache.org,44709,1686178792000, archiveDir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/oldWALs, maxLogs=10 2023-06-07 22:59:52,119 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/WALs/jenkins-hbase4.apache.org,44709,1686178792000/jenkins-hbase4.apache.org%2C44709%2C1686178792000.1686178792114 2023-06-07 22:59:52,119 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39723,DS-f58f2b4e-e295-495a-806b-e7588845dc70,DISK], 
DatanodeInfoWithStorage[127.0.0.1:36205,DS-1e5b0ccb-8279-41c6-b420-4191875725f3,DISK]] 2023-06-07 22:59:52,119 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:59:52,119 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:59:52,119 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:59:52,119 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:59:52,120 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:59:52,122 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-07 22:59:52,122 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to 
compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-07 22:59:52,122 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:59:52,123 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:59:52,123 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:59:52,126 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-07 22:59:52,130 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:59:52,130 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=844801, jitterRate=0.07422058284282684}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-07 22:59:52,130 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-07 22:59:52,130 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-07 22:59:52,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-07 22:59:52,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-07 22:59:52,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-07 22:59:52,132 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-07 22:59:52,132 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-07 22:59:52,132 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-07 22:59:52,134 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-07 22:59:52,135 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-06-07 22:59:52,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-07 22:59:52,146 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-07 22:59:52,146 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-07 22:59:52,146 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-07 22:59:52,146 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-07 22:59:52,150 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:59:52,150 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-07 22:59:52,150 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44709-0x100a78505330000, 
quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-07 22:59:52,151 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-07 22:59:52,152 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): regionserver:46751-0x100a78505330001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-07 22:59:52,152 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-07 22:59:52,152 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:59:52,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44709,1686178792000, sessionid=0x100a78505330000, setting cluster-up flag (Was=false) 2023-06-07 22:59:52,156 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:59:52,160 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-07 22:59:52,161 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure 
member=jenkins-hbase4.apache.org,44709,1686178792000 2023-06-07 22:59:52,164 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:59:52,167 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-07 22:59:52,167 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44709,1686178792000 2023-06-07 22:59:52,168 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/.hbase-snapshot/.tmp 2023-06-07 22:59:52,170 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-07 22:59:52,170 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:59:52,170 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:59:52,170 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:59:52,170 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service 
name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-07 22:59:52,170 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-06-07 22:59:52,170 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:59:52,170 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-07 22:59:52,170 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:59:52,171 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686178822171 2023-06-07 22:59:52,171 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-07 22:59:52,171 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-07 22:59:52,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-07 22:59:52,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-07 22:59:52,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-07 22:59:52,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-07 22:59:52,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:52,172 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-07 22:59:52,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-07 22:59:52,172 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-07 22:59:52,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-07 22:59:52,173 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-07 22:59:52,173 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-07 22:59:52,173 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-07 22:59:52,173 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178792173,5,FailOnTimeoutGroup] 2023-06-07 22:59:52,173 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small 
files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178792173,5,FailOnTimeoutGroup] 2023-06-07 22:59:52,173 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:52,173 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-07 22:59:52,173 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:52,173 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:52,173 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'} 2023-06-07 22:59:52,183 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-07 22:59:52,184 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-07 22:59:52,184 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db 2023-06-07 22:59:52,193 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 
22:59:52,194 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-07 22:59:52,195 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/info 2023-06-07 22:59:52,195 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-07 22:59:52,196 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:59:52,196 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-07 22:59:52,197 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/rep_barrier 2023-06-07 22:59:52,197 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-07 22:59:52,198 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:59:52,198 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-07 22:59:52,199 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/table 2023-06-07 22:59:52,200 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-07 22:59:52,200 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:59:52,201 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740 2023-06-07 22:59:52,201 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740 2023-06-07 22:59:52,203 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-07 22:59:52,204 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-07 22:59:52,206 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:59:52,206 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=836742, jitterRate=0.06397312879562378}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-07 22:59:52,206 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-07 22:59:52,206 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-07 22:59:52,206 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-07 22:59:52,206 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-07 22:59:52,206 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-07 22:59:52,206 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-07 22:59:52,207 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-07 22:59:52,207 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-07 22:59:52,208 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-07 22:59:52,208 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-07 22:59:52,208 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-07 22:59:52,209 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-07 22:59:52,211 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-07 22:59:52,261 INFO [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(951): ClusterId : 54f19e22-aae0-4de0-99a8-2b5d353b8428 2023-06-07 22:59:52,262 DEBUG [RS:0;jenkins-hbase4:46751] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-07 22:59:52,269 DEBUG [RS:0;jenkins-hbase4:46751] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-07 22:59:52,269 DEBUG [RS:0;jenkins-hbase4:46751] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-07 22:59:52,271 DEBUG [RS:0;jenkins-hbase4:46751] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-07 22:59:52,272 DEBUG [RS:0;jenkins-hbase4:46751] zookeeper.ReadOnlyZKClient(139): Connect 0x5ff9ef56 to 127.0.0.1:54282 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-07 22:59:52,275 DEBUG [RS:0;jenkins-hbase4:46751] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@513e0ad9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 
2023-06-07 22:59:52,275 DEBUG [RS:0;jenkins-hbase4:46751] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55018426, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-07 22:59:52,284 DEBUG [RS:0;jenkins-hbase4:46751] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46751 2023-06-07 22:59:52,284 INFO [RS:0;jenkins-hbase4:46751] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-07 22:59:52,284 INFO [RS:0;jenkins-hbase4:46751] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-07 22:59:52,284 DEBUG [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(1022): About to register with Master. 2023-06-07 22:59:52,284 INFO [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,44709,1686178792000 with isa=jenkins-hbase4.apache.org/172.31.14.131:46751, startcode=1686178792045 2023-06-07 22:59:52,284 DEBUG [RS:0;jenkins-hbase4:46751] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-07 22:59:52,287 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58685, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-06-07 22:59:52,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 22:59:52,288 DEBUG [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db 2023-06-07 22:59:52,288 DEBUG 
[RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33443 2023-06-07 22:59:52,288 DEBUG [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-07 22:59:52,290 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-07 22:59:52,290 DEBUG [RS:0;jenkins-hbase4:46751] zookeeper.ZKUtil(162): regionserver:46751-0x100a78505330001, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 22:59:52,291 WARN [RS:0;jenkins-hbase4:46751] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-07 22:59:52,291 INFO [RS:0;jenkins-hbase4:46751] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-07 22:59:52,291 DEBUG [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(1946): logDir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 22:59:52,291 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46751,1686178792045] 2023-06-07 22:59:52,295 DEBUG [RS:0;jenkins-hbase4:46751] zookeeper.ZKUtil(162): regionserver:46751-0x100a78505330001, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 22:59:52,295 DEBUG [RS:0;jenkins-hbase4:46751] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-07 22:59:52,296 INFO [RS:0;jenkins-hbase4:46751] 
regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-07 22:59:52,299 INFO [RS:0;jenkins-hbase4:46751] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-07 22:59:52,299 INFO [RS:0;jenkins-hbase4:46751] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-07 22:59:52,299 INFO [RS:0;jenkins-hbase4:46751] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:52,299 INFO [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-07 22:59:52,300 INFO [RS:0;jenkins-hbase4:46751] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-07 22:59:52,300 DEBUG [RS:0;jenkins-hbase4:46751] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:59:52,300 DEBUG [RS:0;jenkins-hbase4:46751] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:59:52,300 DEBUG [RS:0;jenkins-hbase4:46751] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:59:52,300 DEBUG [RS:0;jenkins-hbase4:46751] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:59:52,301 DEBUG [RS:0;jenkins-hbase4:46751] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:59:52,301 DEBUG [RS:0;jenkins-hbase4:46751] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-07 22:59:52,301 DEBUG [RS:0;jenkins-hbase4:46751] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:59:52,301 DEBUG [RS:0;jenkins-hbase4:46751] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:59:52,301 DEBUG [RS:0;jenkins-hbase4:46751] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 22:59:52,301 DEBUG [RS:0;jenkins-hbase4:46751] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 
2023-06-07 22:59:52,303 INFO [RS:0;jenkins-hbase4:46751] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:52,303 INFO [RS:0;jenkins-hbase4:46751] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:52,303 INFO [RS:0;jenkins-hbase4:46751] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:52,315 INFO [RS:0;jenkins-hbase4:46751] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-07 22:59:52,315 INFO [RS:0;jenkins-hbase4:46751] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46751,1686178792045-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:52,325 INFO [RS:0;jenkins-hbase4:46751] regionserver.Replication(203): jenkins-hbase4.apache.org,46751,1686178792045 started 2023-06-07 22:59:52,325 INFO [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46751,1686178792045, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46751, sessionid=0x100a78505330001 2023-06-07 22:59:52,325 DEBUG [RS:0;jenkins-hbase4:46751] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-07 22:59:52,325 DEBUG [RS:0;jenkins-hbase4:46751] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 22:59:52,325 DEBUG [RS:0;jenkins-hbase4:46751] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46751,1686178792045' 2023-06-07 22:59:52,325 DEBUG [RS:0;jenkins-hbase4:46751] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-07 22:59:52,326 DEBUG [RS:0;jenkins-hbase4:46751] 
procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-07 22:59:52,326 DEBUG [RS:0;jenkins-hbase4:46751] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-07 22:59:52,326 DEBUG [RS:0;jenkins-hbase4:46751] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-07 22:59:52,326 DEBUG [RS:0;jenkins-hbase4:46751] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 22:59:52,326 DEBUG [RS:0;jenkins-hbase4:46751] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46751,1686178792045' 2023-06-07 22:59:52,326 DEBUG [RS:0;jenkins-hbase4:46751] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-07 22:59:52,326 DEBUG [RS:0;jenkins-hbase4:46751] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-07 22:59:52,327 DEBUG [RS:0;jenkins-hbase4:46751] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-07 22:59:52,327 INFO [RS:0;jenkins-hbase4:46751] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-07 22:59:52,327 INFO [RS:0;jenkins-hbase4:46751] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-07 22:59:52,361 DEBUG [jenkins-hbase4:44709] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-07 22:59:52,362 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46751,1686178792045, state=OPENING 2023-06-07 22:59:52,364 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-07 22:59:52,365 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:59:52,365 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-07 22:59:52,365 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46751,1686178792045}] 2023-06-07 22:59:52,428 INFO [RS:0;jenkins-hbase4:46751] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46751%2C1686178792045, suffix=, logDir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045, archiveDir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/oldWALs, maxLogs=32 2023-06-07 22:59:52,436 INFO [RS:0;jenkins-hbase4:46751] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045/jenkins-hbase4.apache.org%2C46751%2C1686178792045.1686178792429 2023-06-07 22:59:52,436 DEBUG [RS:0;jenkins-hbase4:46751] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:39723,DS-f58f2b4e-e295-495a-806b-e7588845dc70,DISK], DatanodeInfoWithStorage[127.0.0.1:36205,DS-1e5b0ccb-8279-41c6-b420-4191875725f3,DISK]] 2023-06-07 22:59:52,519 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 22:59:52,519 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-07 22:59:52,523 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36530, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-07 22:59:52,526 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-07 22:59:52,526 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-07 22:59:52,528 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46751%2C1686178792045.meta, suffix=.meta, logDir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045, archiveDir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/oldWALs, maxLogs=32 2023-06-07 22:59:52,535 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045/jenkins-hbase4.apache.org%2C46751%2C1686178792045.meta.1686178792528.meta 2023-06-07 22:59:52,535 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:36205,DS-1e5b0ccb-8279-41c6-b420-4191875725f3,DISK], DatanodeInfoWithStorage[127.0.0.1:39723,DS-f58f2b4e-e295-495a-806b-e7588845dc70,DISK]] 2023-06-07 22:59:52,535 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:59:52,535 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-07 22:59:52,536 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-07 22:59:52,536 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-06-07 22:59:52,536 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-07 22:59:52,536 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:59:52,536 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-07 22:59:52,536 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-07 22:59:52,537 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-07 22:59:52,538 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/info 2023-06-07 22:59:52,538 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/info 2023-06-07 22:59:52,538 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-07 22:59:52,539 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:59:52,539 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-07 22:59:52,540 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/rep_barrier 2023-06-07 22:59:52,540 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/rep_barrier 2023-06-07 22:59:52,540 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-07 22:59:52,541 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:59:52,541 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-07 22:59:52,541 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/table 2023-06-07 22:59:52,541 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/table 2023-06-07 22:59:52,542 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-07 22:59:52,542 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): 
Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:59:52,543 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740 2023-06-07 22:59:52,544 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740 2023-06-07 22:59:52,546 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-07 22:59:52,547 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-07 22:59:52,548 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=737590, jitterRate=-0.06210581958293915}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-07 22:59:52,548 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-07 22:59:52,550 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686178792519 2023-06-07 22:59:52,553 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-07 22:59:52,554 INFO 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-07 22:59:52,554 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46751,1686178792045, state=OPEN 2023-06-07 22:59:52,556 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-07 22:59:52,556 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-07 22:59:52,559 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-07 22:59:52,559 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46751,1686178792045 in 191 msec 2023-06-07 22:59:52,561 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-07 22:59:52,561 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 351 msec 2023-06-07 22:59:52,563 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 394 msec 2023-06-07 22:59:52,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686178792563, completionTime=-1 2023-06-07 22:59:52,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-07 
22:59:52,563 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-07 22:59:52,566 DEBUG [hconnection-0x59b46e56-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-07 22:59:52,568 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36546, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-07 22:59:52,569 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-07 22:59:52,569 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686178852569 2023-06-07 22:59:52,569 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686178912569 2023-06-07 22:59:52,569 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-06-07 22:59:52,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44709,1686178792000-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:52,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44709,1686178792000-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-06-07 22:59:52,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44709,1686178792000-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:52,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44709, period=300000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:52,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-07 22:59:52,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-06-07 22:59:52,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-07 22:59:52,577 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-07 22:59:52,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-07 22:59:52,579 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-07 22:59:52,580 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-07 22:59:52,582 DEBUG [HFileArchiver-9] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/.tmp/data/hbase/namespace/5a51b58b5743e1a308ae06e8c83cc4ef 2023-06-07 22:59:52,582 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/.tmp/data/hbase/namespace/5a51b58b5743e1a308ae06e8c83cc4ef empty. 2023-06-07 22:59:52,583 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/.tmp/data/hbase/namespace/5a51b58b5743e1a308ae06e8c83cc4ef 2023-06-07 22:59:52,583 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-07 22:59:52,594 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-07 22:59:52,595 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5a51b58b5743e1a308ae06e8c83cc4ef, NAME => 'hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/.tmp 2023-06-07 22:59:52,603 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:59:52,603 DEBUG 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 5a51b58b5743e1a308ae06e8c83cc4ef, disabling compactions & flushes 2023-06-07 22:59:52,603 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. 2023-06-07 22:59:52,603 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. 2023-06-07 22:59:52,603 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. after waiting 0 ms 2023-06-07 22:59:52,603 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. 2023-06-07 22:59:52,603 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. 2023-06-07 22:59:52,603 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 5a51b58b5743e1a308ae06e8c83cc4ef: 2023-06-07 22:59:52,605 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-07 22:59:52,606 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178792606"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178792606"}]},"ts":"1686178792606"} 2023-06-07 22:59:52,609 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-07 22:59:52,610 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-07 22:59:52,610 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178792610"}]},"ts":"1686178792610"} 2023-06-07 22:59:52,611 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-07 22:59:52,617 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5a51b58b5743e1a308ae06e8c83cc4ef, ASSIGN}] 2023-06-07 22:59:52,619 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5a51b58b5743e1a308ae06e8c83cc4ef, ASSIGN 2023-06-07 22:59:52,619 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=5a51b58b5743e1a308ae06e8c83cc4ef, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46751,1686178792045; forceNewPlan=false, retain=false 2023-06-07 22:59:52,771 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=5a51b58b5743e1a308ae06e8c83cc4ef, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 22:59:52,771 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178792771"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178792771"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178792771"}]},"ts":"1686178792771"} 2023-06-07 22:59:52,773 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 5a51b58b5743e1a308ae06e8c83cc4ef, server=jenkins-hbase4.apache.org,46751,1686178792045}] 2023-06-07 22:59:52,929 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. 2023-06-07 22:59:52,929 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5a51b58b5743e1a308ae06e8c83cc4ef, NAME => 'hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef.', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:59:52,929 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 5a51b58b5743e1a308ae06e8c83cc4ef 2023-06-07 22:59:52,929 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:59:52,929 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5a51b58b5743e1a308ae06e8c83cc4ef 2023-06-07 22:59:52,929 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5a51b58b5743e1a308ae06e8c83cc4ef 2023-06-07 22:59:52,930 INFO 
[StoreOpener-5a51b58b5743e1a308ae06e8c83cc4ef-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5a51b58b5743e1a308ae06e8c83cc4ef 2023-06-07 22:59:52,932 DEBUG [StoreOpener-5a51b58b5743e1a308ae06e8c83cc4ef-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/namespace/5a51b58b5743e1a308ae06e8c83cc4ef/info 2023-06-07 22:59:52,932 DEBUG [StoreOpener-5a51b58b5743e1a308ae06e8c83cc4ef-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/namespace/5a51b58b5743e1a308ae06e8c83cc4ef/info 2023-06-07 22:59:52,932 INFO [StoreOpener-5a51b58b5743e1a308ae06e8c83cc4ef-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5a51b58b5743e1a308ae06e8c83cc4ef columnFamilyName info 2023-06-07 22:59:52,932 INFO [StoreOpener-5a51b58b5743e1a308ae06e8c83cc4ef-1] regionserver.HStore(310): Store=5a51b58b5743e1a308ae06e8c83cc4ef/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:59:52,933 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/namespace/5a51b58b5743e1a308ae06e8c83cc4ef 2023-06-07 22:59:52,934 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/namespace/5a51b58b5743e1a308ae06e8c83cc4ef 2023-06-07 22:59:52,937 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5a51b58b5743e1a308ae06e8c83cc4ef 2023-06-07 22:59:52,938 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/namespace/5a51b58b5743e1a308ae06e8c83cc4ef/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:59:52,939 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5a51b58b5743e1a308ae06e8c83cc4ef; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=835176, jitterRate=0.06198123097419739}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-07 22:59:52,939 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5a51b58b5743e1a308ae06e8c83cc4ef: 2023-06-07 22:59:52,941 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef., pid=6, masterSystemTime=1686178792925 2023-06-07 22:59:52,943 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open 
deploy task for hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. 2023-06-07 22:59:52,943 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. 2023-06-07 22:59:52,943 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=5a51b58b5743e1a308ae06e8c83cc4ef, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 22:59:52,944 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178792943"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1686178792943"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686178792943"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178792943"}]},"ts":"1686178792943"} 2023-06-07 22:59:52,947 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-07 22:59:52,947 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 5a51b58b5743e1a308ae06e8c83cc4ef, server=jenkins-hbase4.apache.org,46751,1686178792045 in 172 msec 2023-06-07 22:59:52,949 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-07 22:59:52,950 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=5a51b58b5743e1a308ae06e8c83cc4ef, ASSIGN in 330 msec 2023-06-07 22:59:52,950 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-07 
22:59:52,951 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178792950"}]},"ts":"1686178792950"} 2023-06-07 22:59:52,952 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-07 22:59:52,958 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-07 22:59:52,959 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 382 msec 2023-06-07 22:59:52,978 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-07 22:59:52,980 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-07 22:59:52,980 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:59:52,983 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-07 22:59:52,990 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-07 22:59:52,994 
INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-06-07 22:59:53,005 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-07 22:59:53,013 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-07 22:59:53,016 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-06-07 22:59:53,028 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-07 22:59:53,031 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-07 22:59:53,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.964sec 2023-06-07 22:59:53,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-07 22:59:53,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-06-07 22:59:53,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-07 22:59:53,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44709,1686178792000-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-07 22:59:53,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44709,1686178792000-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-06-07 22:59:53,033 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-07 22:59:53,062 DEBUG [Listener at localhost/35697] zookeeper.ReadOnlyZKClient(139): Connect 0x7c94c1cb to 127.0.0.1:54282 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-07 22:59:53,066 DEBUG [Listener at localhost/35697] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2d937241, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-07 22:59:53,068 DEBUG [hconnection-0x205c6710-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-07 22:59:53,070 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36562, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-07 22:59:53,071 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44709,1686178792000 2023-06-07 22:59:53,071 INFO [Listener at localhost/35697] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block 
reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 22:59:53,080 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-07 22:59:53,080 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 22:59:53,081 INFO [Listener at localhost/35697] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-07 22:59:53,082 DEBUG [Listener at localhost/35697] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-07 22:59:53,087 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39246, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-07 22:59:53,089 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-07 22:59:53,089 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-07 22:59:53,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-07 22:59:53,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-06-07 22:59:53,092 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-07 22:59:53,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-06-07 22:59:53,093 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-07 22:59:53,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-07 22:59:53,095 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/.tmp/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 22:59:53,095 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/.tmp/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4 empty. 2023-06-07 22:59:53,096 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/.tmp/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 22:59:53,096 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-06-07 22:59:53,106 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-06-07 22:59:53,107 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => c8eabbbdd1a3cbfe44b966f5f23e08d4, NAME => 'TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/.tmp 2023-06-07 22:59:53,113 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:59:53,114 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1604): Closing 
c8eabbbdd1a3cbfe44b966f5f23e08d4, disabling compactions & flushes 2023-06-07 22:59:53,114 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. 2023-06-07 22:59:53,114 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. 2023-06-07 22:59:53,114 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. after waiting 0 ms 2023-06-07 22:59:53,114 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. 2023-06-07 22:59:53,114 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. 
2023-06-07 22:59:53,114 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for c8eabbbdd1a3cbfe44b966f5f23e08d4: 2023-06-07 22:59:53,116 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-07 22:59:53,117 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686178793116"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178793116"}]},"ts":"1686178793116"} 2023-06-07 22:59:53,118 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-07 22:59:53,119 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-07 22:59:53,119 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178793119"}]},"ts":"1686178793119"} 2023-06-07 22:59:53,120 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-06-07 22:59:53,123 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c8eabbbdd1a3cbfe44b966f5f23e08d4, ASSIGN}] 2023-06-07 22:59:53,124 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c8eabbbdd1a3cbfe44b966f5f23e08d4, ASSIGN 2023-06-07 22:59:53,125 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c8eabbbdd1a3cbfe44b966f5f23e08d4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46751,1686178792045; forceNewPlan=false, retain=false 2023-06-07 22:59:53,276 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c8eabbbdd1a3cbfe44b966f5f23e08d4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 22:59:53,276 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686178793276"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178793276"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178793276"}]},"ts":"1686178793276"} 2023-06-07 22:59:53,278 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure c8eabbbdd1a3cbfe44b966f5f23e08d4, server=jenkins-hbase4.apache.org,46751,1686178792045}] 2023-06-07 22:59:53,434 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. 
2023-06-07 22:59:53,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c8eabbbdd1a3cbfe44b966f5f23e08d4, NAME => 'TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.', STARTKEY => '', ENDKEY => ''} 2023-06-07 22:59:53,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 22:59:53,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 22:59:53,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 22:59:53,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 22:59:53,436 INFO [StoreOpener-c8eabbbdd1a3cbfe44b966f5f23e08d4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 22:59:53,437 DEBUG [StoreOpener-c8eabbbdd1a3cbfe44b966f5f23e08d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info 2023-06-07 22:59:53,437 DEBUG [StoreOpener-c8eabbbdd1a3cbfe44b966f5f23e08d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info 2023-06-07 22:59:53,438 INFO [StoreOpener-c8eabbbdd1a3cbfe44b966f5f23e08d4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c8eabbbdd1a3cbfe44b966f5f23e08d4 columnFamilyName info 2023-06-07 22:59:53,438 INFO [StoreOpener-c8eabbbdd1a3cbfe44b966f5f23e08d4-1] regionserver.HStore(310): Store=c8eabbbdd1a3cbfe44b966f5f23e08d4/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 22:59:53,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 22:59:53,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 22:59:53,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 
c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 22:59:53,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 22:59:53,444 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c8eabbbdd1a3cbfe44b966f5f23e08d4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=859809, jitterRate=0.09330381453037262}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-07 22:59:53,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c8eabbbdd1a3cbfe44b966f5f23e08d4: 2023-06-07 22:59:53,445 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4., pid=11, masterSystemTime=1686178793431 2023-06-07 22:59:53,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. 2023-06-07 22:59:53,447 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. 
2023-06-07 22:59:53,447 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c8eabbbdd1a3cbfe44b966f5f23e08d4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 22:59:53,447 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686178793447"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1686178793447"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686178793447"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178793447"}]},"ts":"1686178793447"} 2023-06-07 22:59:53,451 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-07 22:59:53,451 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure c8eabbbdd1a3cbfe44b966f5f23e08d4, server=jenkins-hbase4.apache.org,46751,1686178792045 in 171 msec 2023-06-07 22:59:53,453 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-07 22:59:53,453 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c8eabbbdd1a3cbfe44b966f5f23e08d4, ASSIGN in 328 msec 2023-06-07 22:59:53,454 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-07 22:59:53,454 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178793454"}]},"ts":"1686178793454"} 2023-06-07 22:59:53,455 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-06-07 22:59:53,457 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-07 22:59:53,459 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 368 msec 2023-06-07 22:59:56,211 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-07 22:59:58,296 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-07 22:59:58,296 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-07 22:59:58,297 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-06-07 23:00:03,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-07 23:00:03,095 INFO [Listener at localhost/35697] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, procId: 9 completed 2023-06-07 23:00:03,097 DEBUG [Listener at localhost/35697] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-06-07 23:00:03,097 DEBUG [Listener at localhost/35697] hbase.HBaseTestingUtility(2633): 
firstRegionName=TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. 2023-06-07 23:00:03,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 23:00:03,109 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c8eabbbdd1a3cbfe44b966f5f23e08d4 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-07 23:00:03,137 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c8eabbbdd1a3cbfe44b966f5f23e08d4, server=jenkins-hbase4.apache.org,46751,1686178792045 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-06-07 23:00:03,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] ipc.CallRunner(144): callId: 38 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36562 deadline: 1686178813137, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c8eabbbdd1a3cbfe44b966f5f23e08d4, server=jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 23:00:03,522 INFO [MemStoreFlusher.0] 
regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/3dc655bcac584143ac39feb1f8a479bb 2023-06-07 23:00:03,531 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/3dc655bcac584143ac39feb1f8a479bb as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/3dc655bcac584143ac39feb1f8a479bb 2023-06-07 23:00:03,537 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/3dc655bcac584143ac39feb1f8a479bb, entries=7, sequenceid=11, filesize=12.1 K 2023-06-07 23:00:03,538 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for c8eabbbdd1a3cbfe44b966f5f23e08d4 in 429ms, sequenceid=11, compaction requested=false 2023-06-07 23:00:03,539 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c8eabbbdd1a3cbfe44b966f5f23e08d4: 2023-06-07 23:00:13,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 23:00:13,193 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c8eabbbdd1a3cbfe44b966f5f23e08d4 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-06-07 23:00:13,203 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 
KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/5285840749034954b9a0847665432d2d 2023-06-07 23:00:13,212 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/5285840749034954b9a0847665432d2d as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/5285840749034954b9a0847665432d2d 2023-06-07 23:00:13,217 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/5285840749034954b9a0847665432d2d, entries=23, sequenceid=37, filesize=29.0 K 2023-06-07 23:00:13,218 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=2.10 KB/2152 for c8eabbbdd1a3cbfe44b966f5f23e08d4 in 24ms, sequenceid=37, compaction requested=false 2023-06-07 23:00:13,218 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c8eabbbdd1a3cbfe44b966f5f23e08d4: 2023-06-07 23:00:13,218 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=41.1 K, sizeToCheck=16.0 K 2023-06-07 23:00:13,218 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-07 23:00:13,218 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/5285840749034954b9a0847665432d2d because midkey is the same as first or last row 2023-06-07 23:00:15,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 23:00:15,202 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c8eabbbdd1a3cbfe44b966f5f23e08d4 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-07 23:00:15,222 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=47 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/598dae4aa26c4c3da97810001e683ce2 2023-06-07 23:00:15,228 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/598dae4aa26c4c3da97810001e683ce2 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/598dae4aa26c4c3da97810001e683ce2 2023-06-07 23:00:15,233 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/598dae4aa26c4c3da97810001e683ce2, entries=7, sequenceid=47, filesize=12.1 K 2023-06-07 23:00:15,234 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=22.07 KB/22596 for c8eabbbdd1a3cbfe44b966f5f23e08d4 in 32ms, 
sequenceid=47, compaction requested=true 2023-06-07 23:00:15,234 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c8eabbbdd1a3cbfe44b966f5f23e08d4: 2023-06-07 23:00:15,234 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=53.2 K, sizeToCheck=16.0 K 2023-06-07 23:00:15,234 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-07 23:00:15,234 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/5285840749034954b9a0847665432d2d because midkey is the same as first or last row 2023-06-07 23:00:15,234 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-07 23:00:15,235 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-07 23:00:15,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 23:00:15,236 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c8eabbbdd1a3cbfe44b966f5f23e08d4 1/1 column families, dataSize=23.12 KB heapSize=25 KB 2023-06-07 23:00:15,237 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 54449 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-07 23:00:15,238 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1912): c8eabbbdd1a3cbfe44b966f5f23e08d4/info is initiating minor compaction (all 
files) 2023-06-07 23:00:15,238 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of c8eabbbdd1a3cbfe44b966f5f23e08d4/info in TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. 2023-06-07 23:00:15,238 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/3dc655bcac584143ac39feb1f8a479bb, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/5285840749034954b9a0847665432d2d, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/598dae4aa26c4c3da97810001e683ce2] into tmpdir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp, totalSize=53.2 K 2023-06-07 23:00:15,238 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting 3dc655bcac584143ac39feb1f8a479bb, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1686178803100 2023-06-07 23:00:15,240 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting 5285840749034954b9a0847665432d2d, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=37, earliestPutTs=1686178803110 2023-06-07 23:00:15,240 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting 598dae4aa26c4c3da97810001e683ce2, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=47, earliestPutTs=1686178813194 2023-06-07 23:00:15,255 INFO 
[MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=23.12 KB at sequenceid=72 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/db5865f503204e1cb695f1aeb7f3d27c 2023-06-07 23:00:15,258 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] throttle.PressureAwareThroughputController(145): c8eabbbdd1a3cbfe44b966f5f23e08d4#info#compaction#29 average throughput is 18.98 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-07 23:00:15,264 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/db5865f503204e1cb695f1aeb7f3d27c as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/db5865f503204e1cb695f1aeb7f3d27c 2023-06-07 23:00:15,275 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/db5865f503204e1cb695f1aeb7f3d27c, entries=22, sequenceid=72, filesize=27.9 K 2023-06-07 23:00:15,276 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.12 KB/23672, heapSize ~24.98 KB/25584, currentSize=5.25 KB/5380 for c8eabbbdd1a3cbfe44b966f5f23e08d4 in 40ms, sequenceid=72, compaction requested=false 2023-06-07 23:00:15,276 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c8eabbbdd1a3cbfe44b966f5f23e08d4: 2023-06-07 23:00:15,276 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because 
info size=81.1 K, sizeToCheck=16.0 K 2023-06-07 23:00:15,276 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-07 23:00:15,276 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/5285840749034954b9a0847665432d2d because midkey is the same as first or last row 2023-06-07 23:00:15,276 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/c21f9403b3fb4fe7a0e3e7ce775e52c2 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/c21f9403b3fb4fe7a0e3e7ce775e52c2 2023-06-07 23:00:15,282 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in c8eabbbdd1a3cbfe44b966f5f23e08d4/info of c8eabbbdd1a3cbfe44b966f5f23e08d4 into c21f9403b3fb4fe7a0e3e7ce775e52c2(size=43.8 K), total size for store is 71.7 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-07 23:00:15,282 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for c8eabbbdd1a3cbfe44b966f5f23e08d4: 2023-06-07 23:00:15,282 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4., storeName=c8eabbbdd1a3cbfe44b966f5f23e08d4/info, priority=13, startTime=1686178815234; duration=0sec 2023-06-07 23:00:15,283 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=71.7 K, sizeToCheck=16.0 K 2023-06-07 23:00:15,283 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-07 23:00:15,283 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/c21f9403b3fb4fe7a0e3e7ce775e52c2 because midkey is the same as first or last row 2023-06-07 23:00:15,283 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-07 23:00:17,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 23:00:17,249 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c8eabbbdd1a3cbfe44b966f5f23e08d4 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-07 23:00:17,262 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=83 (bloomFilter=true), 
to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/71385588d3264f259f3911d2db73dce4 2023-06-07 23:00:17,267 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/71385588d3264f259f3911d2db73dce4 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/71385588d3264f259f3911d2db73dce4 2023-06-07 23:00:17,273 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/71385588d3264f259f3911d2db73dce4, entries=7, sequenceid=83, filesize=12.1 K 2023-06-07 23:00:17,274 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=22.07 KB/22596 for c8eabbbdd1a3cbfe44b966f5f23e08d4 in 25ms, sequenceid=83, compaction requested=true 2023-06-07 23:00:17,274 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c8eabbbdd1a3cbfe44b966f5f23e08d4: 2023-06-07 23:00:17,274 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=83.8 K, sizeToCheck=16.0 K 2023-06-07 23:00:17,274 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-07 23:00:17,274 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/c21f9403b3fb4fe7a0e3e7ce775e52c2 because midkey is the same as first or last row 2023-06-07 23:00:17,274 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-07 23:00:17,274 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-07 23:00:17,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 23:00:17,275 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c8eabbbdd1a3cbfe44b966f5f23e08d4 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-06-07 23:00:17,276 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 85841 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-07 23:00:17,276 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1912): c8eabbbdd1a3cbfe44b966f5f23e08d4/info is initiating minor compaction (all files) 2023-06-07 23:00:17,276 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of c8eabbbdd1a3cbfe44b966f5f23e08d4/info in TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. 
2023-06-07 23:00:17,276 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/c21f9403b3fb4fe7a0e3e7ce775e52c2, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/db5865f503204e1cb695f1aeb7f3d27c, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/71385588d3264f259f3911d2db73dce4] into tmpdir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp, totalSize=83.8 K 2023-06-07 23:00:17,277 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting c21f9403b3fb4fe7a0e3e7ce775e52c2, keycount=37, bloomtype=ROW, size=43.8 K, encoding=NONE, compression=NONE, seqNum=47, earliestPutTs=1686178803100 2023-06-07 23:00:17,277 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting db5865f503204e1cb695f1aeb7f3d27c, keycount=22, bloomtype=ROW, size=27.9 K, encoding=NONE, compression=NONE, seqNum=72, earliestPutTs=1686178815203 2023-06-07 23:00:17,277 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting 71385588d3264f259f3911d2db73dce4, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=83, earliestPutTs=1686178815236 2023-06-07 23:00:17,285 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c8eabbbdd1a3cbfe44b966f5f23e08d4, server=jenkins-hbase4.apache.org,46751,1686178792045 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-06-07 23:00:17,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] ipc.CallRunner(144): callId: 106 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36562 deadline: 1686178827284, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c8eabbbdd1a3cbfe44b966f5f23e08d4, server=jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 23:00:17,286 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=109 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/2682407fa2134482a490f072f1f43816 2023-06-07 23:00:17,292 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] throttle.PressureAwareThroughputController(145): c8eabbbdd1a3cbfe44b966f5f23e08d4#info#compaction#32 average throughput is 33.86 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-07 23:00:17,297 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/2682407fa2134482a490f072f1f43816 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/2682407fa2134482a490f072f1f43816 2023-06-07 23:00:17,304 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/2682407fa2134482a490f072f1f43816, entries=23, sequenceid=109, filesize=29.0 K 2023-06-07 23:00:17,305 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=6.30 KB/6456 for c8eabbbdd1a3cbfe44b966f5f23e08d4 in 30ms, sequenceid=109, compaction requested=false 2023-06-07 23:00:17,305 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c8eabbbdd1a3cbfe44b966f5f23e08d4: 2023-06-07 23:00:17,305 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=112.8 K, sizeToCheck=16.0 K 2023-06-07 23:00:17,305 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-07 23:00:17,305 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/c21f9403b3fb4fe7a0e3e7ce775e52c2 because midkey is the same as first or last row 2023-06-07 23:00:17,312 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/6e27e2b7c3724379883f0ab238cffebe as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/6e27e2b7c3724379883f0ab238cffebe 2023-06-07 23:00:17,318 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in c8eabbbdd1a3cbfe44b966f5f23e08d4/info of c8eabbbdd1a3cbfe44b966f5f23e08d4 into 6e27e2b7c3724379883f0ab238cffebe(size=74.6 K), total size for store is 103.5 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-07 23:00:17,318 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for c8eabbbdd1a3cbfe44b966f5f23e08d4: 2023-06-07 23:00:17,318 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4., storeName=c8eabbbdd1a3cbfe44b966f5f23e08d4/info, priority=13, startTime=1686178817274; duration=0sec 2023-06-07 23:00:17,318 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=103.5 K, sizeToCheck=16.0 K 2023-06-07 23:00:17,318 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-07 23:00:17,319 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-07 23:00:17,319 DEBUG 
[RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-07 23:00:17,320 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44709] assignment.AssignmentManager(1140): Split request from jenkins-hbase4.apache.org,46751,1686178792045, parent={ENCODED => c8eabbbdd1a3cbfe44b966f5f23e08d4, NAME => 'TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-06-07 23:00:17,327 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44709] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 23:00:17,334 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44709] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c8eabbbdd1a3cbfe44b966f5f23e08d4, daughterA=d7cbb10a403956efbbba3b6abbb01884, daughterB=64cd2307f225d9afb388082e250d5f99 2023-06-07 23:00:17,334 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c8eabbbdd1a3cbfe44b966f5f23e08d4, daughterA=d7cbb10a403956efbbba3b6abbb01884, daughterB=64cd2307f225d9afb388082e250d5f99 2023-06-07 23:00:17,334 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c8eabbbdd1a3cbfe44b966f5f23e08d4, daughterA=d7cbb10a403956efbbba3b6abbb01884, daughterB=64cd2307f225d9afb388082e250d5f99 2023-06-07 23:00:17,335 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; 
SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c8eabbbdd1a3cbfe44b966f5f23e08d4, daughterA=d7cbb10a403956efbbba3b6abbb01884, daughterB=64cd2307f225d9afb388082e250d5f99 2023-06-07 23:00:17,343 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c8eabbbdd1a3cbfe44b966f5f23e08d4, UNASSIGN}] 2023-06-07 23:00:17,344 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c8eabbbdd1a3cbfe44b966f5f23e08d4, UNASSIGN 2023-06-07 23:00:17,345 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c8eabbbdd1a3cbfe44b966f5f23e08d4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 23:00:17,345 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686178817345"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178817345"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178817345"}]},"ts":"1686178817345"} 2023-06-07 23:00:17,347 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure c8eabbbdd1a3cbfe44b966f5f23e08d4, server=jenkins-hbase4.apache.org,46751,1686178792045}] 2023-06-07 23:00:17,505 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 23:00:17,505 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c8eabbbdd1a3cbfe44b966f5f23e08d4, disabling compactions & flushes 
2023-06-07 23:00:17,505 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. 2023-06-07 23:00:17,505 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. 2023-06-07 23:00:17,505 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. after waiting 0 ms 2023-06-07 23:00:17,505 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. 2023-06-07 23:00:17,505 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c8eabbbdd1a3cbfe44b966f5f23e08d4 1/1 column families, dataSize=6.30 KB heapSize=7 KB 2023-06-07 23:00:17,519 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.30 KB at sequenceid=119 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/1fc5d642391f4518a770b20134032c29 2023-06-07 23:00:17,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.tmp/info/1fc5d642391f4518a770b20134032c29 as 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/1fc5d642391f4518a770b20134032c29 2023-06-07 23:00:17,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/1fc5d642391f4518a770b20134032c29, entries=6, sequenceid=119, filesize=11.0 K 2023-06-07 23:00:17,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.30 KB/6456, heapSize ~6.98 KB/7152, currentSize=0 B/0 for c8eabbbdd1a3cbfe44b966f5f23e08d4 in 25ms, sequenceid=119, compaction requested=true 2023-06-07 23:00:17,536 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/3dc655bcac584143ac39feb1f8a479bb, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/5285840749034954b9a0847665432d2d, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/c21f9403b3fb4fe7a0e3e7ce775e52c2, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/598dae4aa26c4c3da97810001e683ce2, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/db5865f503204e1cb695f1aeb7f3d27c, 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/71385588d3264f259f3911d2db73dce4] to archive 2023-06-07 23:00:17,537 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-06-07 23:00:17,538 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/3dc655bcac584143ac39feb1f8a479bb to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/3dc655bcac584143ac39feb1f8a479bb 2023-06-07 23:00:17,540 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/5285840749034954b9a0847665432d2d to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/5285840749034954b9a0847665432d2d 2023-06-07 23:00:17,541 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/c21f9403b3fb4fe7a0e3e7ce775e52c2 to 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/c21f9403b3fb4fe7a0e3e7ce775e52c2 2023-06-07 23:00:17,542 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/598dae4aa26c4c3da97810001e683ce2 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/598dae4aa26c4c3da97810001e683ce2 2023-06-07 23:00:17,543 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/db5865f503204e1cb695f1aeb7f3d27c to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/db5865f503204e1cb695f1aeb7f3d27c 2023-06-07 23:00:17,544 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/71385588d3264f259f3911d2db73dce4 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/71385588d3264f259f3911d2db73dce4 2023-06-07 23:00:17,551 
DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/recovered.edits/122.seqid, newMaxSeqId=122, maxSeqId=1
2023-06-07 23:00:17,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.
2023-06-07 23:00:17,552 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c8eabbbdd1a3cbfe44b966f5f23e08d4:
2023-06-07 23:00:17,553 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c8eabbbdd1a3cbfe44b966f5f23e08d4
2023-06-07 23:00:17,554 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c8eabbbdd1a3cbfe44b966f5f23e08d4, regionState=CLOSED
2023-06-07 23:00:17,554 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686178817554"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178817554"}]},"ts":"1686178817554"}
2023-06-07 23:00:17,558 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13
2023-06-07 23:00:17,558 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure c8eabbbdd1a3cbfe44b966f5f23e08d4, server=jenkins-hbase4.apache.org,46751,1686178792045 in 209 msec
2023-06-07 23:00:17,560 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12
2023-06-07 23:00:17,560 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c8eabbbdd1a3cbfe44b966f5f23e08d4, UNASSIGN in 215 msec
2023-06-07 23:00:17,571 INFO [PEWorker-4] assignment.SplitTableRegionProcedure(694): pid=12 splitting 3 storefiles, region=c8eabbbdd1a3cbfe44b966f5f23e08d4, threads=3
2023-06-07 23:00:17,572 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/1fc5d642391f4518a770b20134032c29 for region: c8eabbbdd1a3cbfe44b966f5f23e08d4
2023-06-07 23:00:17,573 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/2682407fa2134482a490f072f1f43816 for region: c8eabbbdd1a3cbfe44b966f5f23e08d4
2023-06-07 23:00:17,573 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/6e27e2b7c3724379883f0ab238cffebe for region: c8eabbbdd1a3cbfe44b966f5f23e08d4
2023-06-07 23:00:17,582 DEBUG [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/2682407fa2134482a490f072f1f43816, top=true
2023-06-07 23:00:17,582 DEBUG [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/1fc5d642391f4518a770b20134032c29, top=true
2023-06-07 23:00:17,587 INFO [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.splits/64cd2307f225d9afb388082e250d5f99/info/TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-2682407fa2134482a490f072f1f43816 for child: 64cd2307f225d9afb388082e250d5f99, parent: c8eabbbdd1a3cbfe44b966f5f23e08d4
2023-06-07 23:00:17,587 INFO [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/.splits/64cd2307f225d9afb388082e250d5f99/info/TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-1fc5d642391f4518a770b20134032c29 for child: 64cd2307f225d9afb388082e250d5f99, parent: c8eabbbdd1a3cbfe44b966f5f23e08d4
2023-06-07 23:00:17,588 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/2682407fa2134482a490f072f1f43816 for region: c8eabbbdd1a3cbfe44b966f5f23e08d4
2023-06-07 23:00:17,588 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/1fc5d642391f4518a770b20134032c29 for region: c8eabbbdd1a3cbfe44b966f5f23e08d4
2023-06-07 23:00:17,609 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/6e27e2b7c3724379883f0ab238cffebe for region: c8eabbbdd1a3cbfe44b966f5f23e08d4
2023-06-07 23:00:17,609 DEBUG [PEWorker-4] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region c8eabbbdd1a3cbfe44b966f5f23e08d4 Daughter A: 1 storefiles, Daughter B: 3 storefiles.
2023-06-07 23:00:17,633 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884/recovered.edits/122.seqid, newMaxSeqId=122, maxSeqId=-1
2023-06-07 23:00:17,635 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/recovered.edits/122.seqid, newMaxSeqId=122, maxSeqId=-1
2023-06-07 23:00:17,638 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686178817638"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1686178817638"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1686178817638"}]},"ts":"1686178817638"}
2023-06-07 23:00:17,638 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686178817638"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178817638"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178817638"}]},"ts":"1686178817638"}
2023-06-07 23:00:17,638 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686178817638"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178817638"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178817638"}]},"ts":"1686178817638"}
2023-06-07 23:00:17,680 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46751] regionserver.HRegion(9158): Flush requested on 1588230740
2023-06-07 23:00:17,680 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all.
2023-06-07 23:00:17,680 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB
2023-06-07 23:00:17,691 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d7cbb10a403956efbbba3b6abbb01884, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=64cd2307f225d9afb388082e250d5f99, ASSIGN}]
2023-06-07 23:00:17,692 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/.tmp/info/a1e6763023b74783bdd8adf5a7a2d054
2023-06-07 23:00:17,692 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d7cbb10a403956efbbba3b6abbb01884, ASSIGN
2023-06-07 23:00:17,692 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=64cd2307f225d9afb388082e250d5f99, ASSIGN
2023-06-07 23:00:17,693 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d7cbb10a403956efbbba3b6abbb01884, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,46751,1686178792045; forceNewPlan=false, retain=false
2023-06-07 23:00:17,693 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=64cd2307f225d9afb388082e250d5f99, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,46751,1686178792045; forceNewPlan=false, retain=false
2023-06-07 23:00:17,705 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/.tmp/table/ad020acc8d514fbc96d3ba9692f0a627
2023-06-07 23:00:17,710 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/.tmp/info/a1e6763023b74783bdd8adf5a7a2d054 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/info/a1e6763023b74783bdd8adf5a7a2d054
2023-06-07 23:00:17,715 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/info/a1e6763023b74783bdd8adf5a7a2d054, entries=29, sequenceid=17, filesize=8.6 K
2023-06-07 23:00:17,715 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/.tmp/table/ad020acc8d514fbc96d3ba9692f0a627 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/table/ad020acc8d514fbc96d3ba9692f0a627
2023-06-07 23:00:17,720 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/table/ad020acc8d514fbc96d3ba9692f0a627, entries=4, sequenceid=17, filesize=4.8 K
2023-06-07 23:00:17,720 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4934, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 40ms, sequenceid=17, compaction requested=false
2023-06-07 23:00:17,721 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740:
2023-06-07 23:00:17,844 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=64cd2307f225d9afb388082e250d5f99, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46751,1686178792045
2023-06-07 23:00:17,844 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=d7cbb10a403956efbbba3b6abbb01884, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46751,1686178792045
2023-06-07 23:00:17,844 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686178817844"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178817844"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178817844"}]},"ts":"1686178817844"}
2023-06-07 23:00:17,844 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686178817844"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178817844"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178817844"}]},"ts":"1686178817844"}
2023-06-07 23:00:17,846 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE; OpenRegionProcedure 64cd2307f225d9afb388082e250d5f99, server=jenkins-hbase4.apache.org,46751,1686178792045}]
2023-06-07 23:00:17,847 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=15, state=RUNNABLE; OpenRegionProcedure d7cbb10a403956efbbba3b6abbb01884, server=jenkins-hbase4.apache.org,46751,1686178792045}]
2023-06-07 23:00:18,001 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.
2023-06-07 23:00:18,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 64cd2307f225d9afb388082e250d5f99, NAME => 'TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.', STARTKEY => 'row0062', ENDKEY => ''}
2023-06-07 23:00:18,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 64cd2307f225d9afb388082e250d5f99
2023-06-07 23:00:18,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 23:00:18,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 64cd2307f225d9afb388082e250d5f99
2023-06-07 23:00:18,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 64cd2307f225d9afb388082e250d5f99
2023-06-07 23:00:18,003 INFO [StoreOpener-64cd2307f225d9afb388082e250d5f99-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 64cd2307f225d9afb388082e250d5f99
2023-06-07 23:00:18,004 DEBUG [StoreOpener-64cd2307f225d9afb388082e250d5f99-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info
2023-06-07 23:00:18,004 DEBUG [StoreOpener-64cd2307f225d9afb388082e250d5f99-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info
2023-06-07 23:00:18,004 INFO [StoreOpener-64cd2307f225d9afb388082e250d5f99-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 64cd2307f225d9afb388082e250d5f99 columnFamilyName info
2023-06-07 23:00:18,014 DEBUG [StoreOpener-64cd2307f225d9afb388082e250d5f99-1] regionserver.HStore(539): loaded hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/6e27e2b7c3724379883f0ab238cffebe.c8eabbbdd1a3cbfe44b966f5f23e08d4->hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/6e27e2b7c3724379883f0ab238cffebe-top
2023-06-07 23:00:18,019 DEBUG [StoreOpener-64cd2307f225d9afb388082e250d5f99-1] regionserver.HStore(539): loaded hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-1fc5d642391f4518a770b20134032c29
2023-06-07 23:00:18,023 DEBUG [StoreOpener-64cd2307f225d9afb388082e250d5f99-1] regionserver.HStore(539): loaded hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-2682407fa2134482a490f072f1f43816
2023-06-07 23:00:18,023 INFO [StoreOpener-64cd2307f225d9afb388082e250d5f99-1] regionserver.HStore(310): Store=64cd2307f225d9afb388082e250d5f99/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 23:00:18,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99
2023-06-07 23:00:18,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99
2023-06-07 23:00:18,027 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 64cd2307f225d9afb388082e250d5f99
2023-06-07 23:00:18,028 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 64cd2307f225d9afb388082e250d5f99; next sequenceid=123; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=830159, jitterRate=0.055601879954338074}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-07 23:00:18,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 64cd2307f225d9afb388082e250d5f99:
2023-06-07 23:00:18,029 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99., pid=17, masterSystemTime=1686178817998
2023-06-07 23:00:18,029 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-07 23:00:18,030 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-07 23:00:18,031 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.
2023-06-07 23:00:18,031 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1912): 64cd2307f225d9afb388082e250d5f99/info is initiating minor compaction (all files)
2023-06-07 23:00:18,031 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 64cd2307f225d9afb388082e250d5f99/info in TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.
2023-06-07 23:00:18,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.
2023-06-07 23:00:18,032 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/6e27e2b7c3724379883f0ab238cffebe.c8eabbbdd1a3cbfe44b966f5f23e08d4->hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/6e27e2b7c3724379883f0ab238cffebe-top, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-2682407fa2134482a490f072f1f43816, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-1fc5d642391f4518a770b20134032c29] into tmpdir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp, totalSize=114.6 K
2023-06-07 23:00:18,032 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.
2023-06-07 23:00:18,032 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884.
2023-06-07 23:00:18,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d7cbb10a403956efbbba3b6abbb01884, NAME => 'TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884.', STARTKEY => '', ENDKEY => 'row0062'}
2023-06-07 23:00:18,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling d7cbb10a403956efbbba3b6abbb01884
2023-06-07 23:00:18,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 23:00:18,032 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=64cd2307f225d9afb388082e250d5f99, regionState=OPEN, openSeqNum=123, regionLocation=jenkins-hbase4.apache.org,46751,1686178792045
2023-06-07 23:00:18,032 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting 6e27e2b7c3724379883f0ab238cffebe.c8eabbbdd1a3cbfe44b966f5f23e08d4, keycount=33, bloomtype=ROW, size=74.6 K, encoding=NONE, compression=NONE, seqNum=84, earliestPutTs=1686178803100
2023-06-07 23:00:18,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d7cbb10a403956efbbba3b6abbb01884
2023-06-07 23:00:18,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d7cbb10a403956efbbba3b6abbb01884
2023-06-07 23:00:18,032 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686178818032"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1686178818032"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686178818032"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178818032"}]},"ts":"1686178818032"}
2023-06-07 23:00:18,033 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-2682407fa2134482a490f072f1f43816, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=109, earliestPutTs=1686178817250
2023-06-07 23:00:18,033 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-1fc5d642391f4518a770b20134032c29, keycount=6, bloomtype=ROW, size=11.0 K, encoding=NONE, compression=NONE, seqNum=119, earliestPutTs=1686178817276
2023-06-07 23:00:18,034 INFO [StoreOpener-d7cbb10a403956efbbba3b6abbb01884-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region d7cbb10a403956efbbba3b6abbb01884
2023-06-07 23:00:18,035 DEBUG [StoreOpener-d7cbb10a403956efbbba3b6abbb01884-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884/info
2023-06-07 23:00:18,035 DEBUG [StoreOpener-d7cbb10a403956efbbba3b6abbb01884-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884/info
2023-06-07 23:00:18,035 INFO [StoreOpener-d7cbb10a403956efbbba3b6abbb01884-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d7cbb10a403956efbbba3b6abbb01884 columnFamilyName info
2023-06-07 23:00:18,036 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16
2023-06-07 23:00:18,036 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; OpenRegionProcedure 64cd2307f225d9afb388082e250d5f99, server=jenkins-hbase4.apache.org,46751,1686178792045 in 188 msec
2023-06-07 23:00:18,038 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=64cd2307f225d9afb388082e250d5f99, ASSIGN in 346 msec
2023-06-07 23:00:18,043 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] throttle.PressureAwareThroughputController(145): 64cd2307f225d9afb388082e250d5f99#info#compaction#36 average throughput is 34.89 MB/second, slept 0 time(s) and total slept time is 0 ms.
0 active operations remaining, total limit is 50.00 MB/second
2023-06-07 23:00:18,044 DEBUG [StoreOpener-d7cbb10a403956efbbba3b6abbb01884-1] regionserver.HStore(539): loaded hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884/info/6e27e2b7c3724379883f0ab238cffebe.c8eabbbdd1a3cbfe44b966f5f23e08d4->hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/6e27e2b7c3724379883f0ab238cffebe-bottom
2023-06-07 23:00:18,044 INFO [StoreOpener-d7cbb10a403956efbbba3b6abbb01884-1] regionserver.HStore(310): Store=d7cbb10a403956efbbba3b6abbb01884/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 23:00:18,045 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884
2023-06-07 23:00:18,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884
2023-06-07 23:00:18,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d7cbb10a403956efbbba3b6abbb01884
2023-06-07 23:00:18,049 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d7cbb10a403956efbbba3b6abbb01884; next sequenceid=123; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=867129, jitterRate=0.10261215269565582}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-07 23:00:18,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d7cbb10a403956efbbba3b6abbb01884:
2023-06-07 23:00:18,050 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884., pid=18, masterSystemTime=1686178817998
2023-06-07 23:00:18,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-07 23:00:18,052 DEBUG [RS:0;jenkins-hbase4:46751-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking
2023-06-07 23:00:18,052 INFO [RS:0;jenkins-hbase4:46751-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884.
2023-06-07 23:00:18,052 DEBUG [RS:0;jenkins-hbase4:46751-longCompactions-0] regionserver.HStore(1912): d7cbb10a403956efbbba3b6abbb01884/info is initiating minor compaction (all files)
2023-06-07 23:00:18,053 INFO [RS:0;jenkins-hbase4:46751-longCompactions-0] regionserver.HRegion(2259): Starting compaction of d7cbb10a403956efbbba3b6abbb01884/info in TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884.
2023-06-07 23:00:18,053 INFO [RS:0;jenkins-hbase4:46751-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884/info/6e27e2b7c3724379883f0ab238cffebe.c8eabbbdd1a3cbfe44b966f5f23e08d4->hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/6e27e2b7c3724379883f0ab238cffebe-bottom] into tmpdir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884/.tmp, totalSize=74.6 K
2023-06-07 23:00:18,054 DEBUG [RS:0;jenkins-hbase4:46751-longCompactions-0] compactions.Compactor(207): Compacting 6e27e2b7c3724379883f0ab238cffebe.c8eabbbdd1a3cbfe44b966f5f23e08d4, keycount=33, bloomtype=ROW, size=74.6 K, encoding=NONE, compression=NONE, seqNum=83, earliestPutTs=1686178803100
2023-06-07 23:00:18,054 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884.
2023-06-07 23:00:18,054 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884.
2023-06-07 23:00:18,055 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=d7cbb10a403956efbbba3b6abbb01884, regionState=OPEN, openSeqNum=123, regionLocation=jenkins-hbase4.apache.org,46751,1686178792045
2023-06-07 23:00:18,055 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686178818055"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1686178818055"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686178818055"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178818055"}]},"ts":"1686178818055"}
2023-06-07 23:00:18,059 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15
2023-06-07 23:00:18,059 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; OpenRegionProcedure d7cbb10a403956efbbba3b6abbb01884, server=jenkins-hbase4.apache.org,46751,1686178792045 in 210 msec
2023-06-07 23:00:18,061 INFO [RS:0;jenkins-hbase4:46751-longCompactions-0] throttle.PressureAwareThroughputController(145): d7cbb10a403956efbbba3b6abbb01884#info#compaction#37 average throughput is 31.30 MB/second, slept 0 time(s) and total slept time is 0 ms.
0 active operations remaining, total limit is 50.00 MB/second
2023-06-07 23:00:18,062 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12
2023-06-07 23:00:18,062 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d7cbb10a403956efbbba3b6abbb01884, ASSIGN in 370 msec
2023-06-07 23:00:18,063 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/b21a1531943446c781fd8c3b2cd0c4e1 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/b21a1531943446c781fd8c3b2cd0c4e1
2023-06-07 23:00:18,063 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c8eabbbdd1a3cbfe44b966f5f23e08d4, daughterA=d7cbb10a403956efbbba3b6abbb01884, daughterB=64cd2307f225d9afb388082e250d5f99 in 735 msec
2023-06-07 23:00:18,071 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 64cd2307f225d9afb388082e250d5f99/info of 64cd2307f225d9afb388082e250d5f99 into b21a1531943446c781fd8c3b2cd0c4e1(size=40.8 K), total size for store is 40.8 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-07 23:00:18,072 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 64cd2307f225d9afb388082e250d5f99:
2023-06-07 23:00:18,072 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99., storeName=64cd2307f225d9afb388082e250d5f99/info, priority=13, startTime=1686178818029; duration=0sec
2023-06-07 23:00:18,072 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-07 23:00:18,079 DEBUG [RS:0;jenkins-hbase4:46751-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884/.tmp/info/49dc2d2f7f0e487c98c716c28c06e695 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884/info/49dc2d2f7f0e487c98c716c28c06e695
2023-06-07 23:00:18,085 INFO [RS:0;jenkins-hbase4:46751-longCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in d7cbb10a403956efbbba3b6abbb01884/info of d7cbb10a403956efbbba3b6abbb01884 into 49dc2d2f7f0e487c98c716c28c06e695(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-07 23:00:18,085 DEBUG [RS:0;jenkins-hbase4:46751-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for d7cbb10a403956efbbba3b6abbb01884:
2023-06-07 23:00:18,085 INFO [RS:0;jenkins-hbase4:46751-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884., storeName=d7cbb10a403956efbbba3b6abbb01884/info, priority=15, startTime=1686178818050; duration=0sec
2023-06-07 23:00:18,085 DEBUG [RS:0;jenkins-hbase4:46751-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-07 23:00:23,107 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2023-06-07 23:00:27,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] ipc.CallRunner(144): callId: 108 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36562 deadline: 1686178837321, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1686178793089.c8eabbbdd1a3cbfe44b966f5f23e08d4. is not online on jenkins-hbase4.apache.org,46751,1686178792045
2023-06-07 23:00:38,495 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=3, created chunk count=13, reused chunk count=29, reuseRatio=69.05%
2023-06-07 23:00:38,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0
2023-06-07 23:00:45,470 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2023-06-07 23:00:49,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on 64cd2307f225d9afb388082e250d5f99
2023-06-07 23:00:49,521 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64cd2307f225d9afb388082e250d5f99 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-07 23:00:49,543 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=133 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/09ff1efdde7d4531ba6bb30de3f923c1
2023-06-07 23:00:49,549 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/09ff1efdde7d4531ba6bb30de3f923c1 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/09ff1efdde7d4531ba6bb30de3f923c1
2023-06-07 23:00:49,555 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/09ff1efdde7d4531ba6bb30de3f923c1, entries=7, sequenceid=133, filesize=12.1 K
2023-06-07 23:00:49,556 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=21.02 KB/21520 for 64cd2307f225d9afb388082e250d5f99 in 35ms, sequenceid=133, compaction requested=false
2023-06-07 23:00:49,556 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64cd2307f225d9afb388082e250d5f99:
2023-06-07 23:00:49,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on 64cd2307f225d9afb388082e250d5f99
2023-06-07 23:00:49,557 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64cd2307f225d9afb388082e250d5f99 1/1 column families, dataSize=23.12 KB heapSize=25 KB
2023-06-07 23:00:49,570 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=23.12 KB at sequenceid=158 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/e3010e41e1e846eca16697383ef697a7
2023-06-07 23:00:49,577 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/e3010e41e1e846eca16697383ef697a7 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/e3010e41e1e846eca16697383ef697a7
2023-06-07 23:00:49,583 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/e3010e41e1e846eca16697383ef697a7, entries=22, sequenceid=158, filesize=27.9 K
2023-06-07 23:00:49,584 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.12 KB/23672, heapSize ~24.98 KB/25584, currentSize=4.20 KB/4304 for 64cd2307f225d9afb388082e250d5f99 in 27ms, sequenceid=158, compaction requested=true
2023-06-07 23:00:49,584 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64cd2307f225d9afb388082e250d5f99:
2023-06-07 23:00:49,584 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-07 23:00:49,584 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-07 23:00:49,585 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 82797 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-07 23:00:49,585 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1912): 64cd2307f225d9afb388082e250d5f99/info is initiating minor compaction (all files)
2023-06-07 23:00:49,585 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 64cd2307f225d9afb388082e250d5f99/info in TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.
2023-06-07 23:00:49,585 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/b21a1531943446c781fd8c3b2cd0c4e1, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/09ff1efdde7d4531ba6bb30de3f923c1, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/e3010e41e1e846eca16697383ef697a7] into tmpdir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp, totalSize=80.9 K
2023-06-07 23:00:49,586 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting b21a1531943446c781fd8c3b2cd0c4e1, keycount=34, bloomtype=ROW, size=40.8 K, encoding=NONE, compression=NONE, seqNum=119, earliestPutTs=1686178815241
2023-06-07 23:00:49,586 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting 09ff1efdde7d4531ba6bb30de3f923c1, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=133, earliestPutTs=1686178847513
2023-06-07 23:00:49,587 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting e3010e41e1e846eca16697383ef697a7, keycount=22, bloomtype=ROW, size=27.9 K, encoding=NONE, compression=NONE, seqNum=158, earliestPutTs=1686178849522
2023-06-07 23:00:49,599 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] throttle.PressureAwareThroughputController(145): 64cd2307f225d9afb388082e250d5f99#info#compaction#40 average throughput is 32.32 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-07 23:00:49,615 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/4f1f136590724269b23b0518cef2589d as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/4f1f136590724269b23b0518cef2589d
2023-06-07 23:00:49,621 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 64cd2307f225d9afb388082e250d5f99/info of 64cd2307f225d9afb388082e250d5f99 into 4f1f136590724269b23b0518cef2589d(size=71.6 K), total size for store is 71.6 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-07 23:00:49,621 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 64cd2307f225d9afb388082e250d5f99:
2023-06-07 23:00:49,622 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99., storeName=64cd2307f225d9afb388082e250d5f99/info, priority=13, startTime=1686178849584; duration=0sec
2023-06-07 23:00:49,622 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-07 23:00:51,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on 64cd2307f225d9afb388082e250d5f99
2023-06-07 23:00:51,566 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64cd2307f225d9afb388082e250d5f99 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-07 23:00:51,575 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=169 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/3b7a7fb139e14b4482e8ba52e25fadab
2023-06-07 23:00:51,581 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/3b7a7fb139e14b4482e8ba52e25fadab as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/3b7a7fb139e14b4482e8ba52e25fadab
2023-06-07 23:00:51,586 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/3b7a7fb139e14b4482e8ba52e25fadab, entries=7, sequenceid=169, filesize=12.1 K
2023-06-07 23:00:51,587 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for 64cd2307f225d9afb388082e250d5f99 in 21ms, sequenceid=169, compaction requested=false
2023-06-07 23:00:51,587 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64cd2307f225d9afb388082e250d5f99:
2023-06-07 23:00:51,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on 64cd2307f225d9afb388082e250d5f99
2023-06-07 23:00:51,588 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64cd2307f225d9afb388082e250d5f99 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB
2023-06-07 23:00:51,605 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=191 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/194d0ba17c41477089d2531fdda1791a
2023-06-07 23:00:51,616 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/194d0ba17c41477089d2531fdda1791a as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/194d0ba17c41477089d2531fdda1791a
2023-06-07 23:00:51,621 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/194d0ba17c41477089d2531fdda1791a, entries=19, sequenceid=191, filesize=24.8 K
2023-06-07 23:00:51,622 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=10.51 KB/10760 for 64cd2307f225d9afb388082e250d5f99 in 34ms, sequenceid=191, compaction requested=true
2023-06-07 23:00:51,622 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64cd2307f225d9afb388082e250d5f99:
2023-06-07 23:00:51,622 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-07 23:00:51,622 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-07 23:00:51,624 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 111068 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-07 23:00:51,624 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1912): 64cd2307f225d9afb388082e250d5f99/info is initiating minor compaction (all files)
2023-06-07 23:00:51,624 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 64cd2307f225d9afb388082e250d5f99/info in TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.
2023-06-07 23:00:51,624 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/4f1f136590724269b23b0518cef2589d, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/3b7a7fb139e14b4482e8ba52e25fadab, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/194d0ba17c41477089d2531fdda1791a] into tmpdir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp, totalSize=108.5 K
2023-06-07 23:00:51,624 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting 4f1f136590724269b23b0518cef2589d, keycount=63, bloomtype=ROW, size=71.6 K, encoding=NONE, compression=NONE, seqNum=158, earliestPutTs=1686178815241
2023-06-07 23:00:51,625 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting 3b7a7fb139e14b4482e8ba52e25fadab, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=169, earliestPutTs=1686178849557
2023-06-07 23:00:51,625 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting 194d0ba17c41477089d2531fdda1791a, keycount=19, bloomtype=ROW, size=24.8 K, encoding=NONE, compression=NONE, seqNum=191, earliestPutTs=1686178851566
2023-06-07 23:00:51,635 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] throttle.PressureAwareThroughputController(145): 64cd2307f225d9afb388082e250d5f99#info#compaction#43 average throughput is 91.33 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-07 23:00:51,652 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/e2a4ed661d6e4cdc949a937af5484a12 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/e2a4ed661d6e4cdc949a937af5484a12
2023-06-07 23:00:51,657 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 64cd2307f225d9afb388082e250d5f99/info of 64cd2307f225d9afb388082e250d5f99 into e2a4ed661d6e4cdc949a937af5484a12(size=99.1 K), total size for store is 99.1 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-07 23:00:51,657 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 64cd2307f225d9afb388082e250d5f99:
2023-06-07 23:00:51,658 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99., storeName=64cd2307f225d9afb388082e250d5f99/info, priority=13, startTime=1686178851622; duration=0sec
2023-06-07 23:00:51,658 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-07 23:00:53,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on 64cd2307f225d9afb388082e250d5f99
2023-06-07 23:00:53,602 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64cd2307f225d9afb388082e250d5f99 1/1 column families, dataSize=11.56 KB heapSize=12.63 KB
2023-06-07 23:00:53,615 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=11.56 KB at sequenceid=206 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/cc669bd6aca2411c904a5683ccef0c42
2023-06-07 23:00:53,621 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/cc669bd6aca2411c904a5683ccef0c42 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/cc669bd6aca2411c904a5683ccef0c42
2023-06-07 23:00:53,627 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/cc669bd6aca2411c904a5683ccef0c42, entries=11, sequenceid=206, filesize=16.3 K
2023-06-07 23:00:53,627 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=64cd2307f225d9afb388082e250d5f99, server=jenkins-hbase4.apache.org,46751,1686178792045
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-07 23:00:53,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] ipc.CallRunner(144): callId: 196 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36562 deadline: 1686178863627, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=64cd2307f225d9afb388082e250d5f99, server=jenkins-hbase4.apache.org,46751,1686178792045
2023-06-07 23:00:53,627 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~11.56 KB/11836, heapSize ~12.61 KB/12912, currentSize=18.91 KB/19368 for 64cd2307f225d9afb388082e250d5f99 in 25ms, sequenceid=206, compaction requested=false
2023-06-07 23:00:53,627 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64cd2307f225d9afb388082e250d5f99:
2023-06-07 23:01:03,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on 64cd2307f225d9afb388082e250d5f99
2023-06-07 23:01:03,685 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64cd2307f225d9afb388082e250d5f99 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB
2023-06-07 23:01:03,699 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=228 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/0ebce097783049d2ad8aad2e9ad367f1
2023-06-07 23:01:03,702 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=64cd2307f225d9afb388082e250d5f99, server=jenkins-hbase4.apache.org,46751,1686178792045
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-07 23:01:03,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] ipc.CallRunner(144): callId: 209 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36562 deadline: 1686178873702, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=64cd2307f225d9afb388082e250d5f99, server=jenkins-hbase4.apache.org,46751,1686178792045
2023-06-07 23:01:03,710 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/0ebce097783049d2ad8aad2e9ad367f1 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/0ebce097783049d2ad8aad2e9ad367f1
2023-06-07 23:01:03,714 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/0ebce097783049d2ad8aad2e9ad367f1, entries=19, sequenceid=228, filesize=24.8 K
2023-06-07 23:01:03,715 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=10.51 KB/10760 for 64cd2307f225d9afb388082e250d5f99 in 30ms, sequenceid=228, compaction requested=true
2023-06-07 23:01:03,715 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64cd2307f225d9afb388082e250d5f99:
2023-06-07 23:01:03,715 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-07 23:01:03,715 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-07 23:01:03,717 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 143514 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-07 23:01:03,717 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1912): 64cd2307f225d9afb388082e250d5f99/info is initiating minor compaction (all files)
2023-06-07 23:01:03,717 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 64cd2307f225d9afb388082e250d5f99/info in TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.
2023-06-07 23:01:03,717 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/e2a4ed661d6e4cdc949a937af5484a12, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/cc669bd6aca2411c904a5683ccef0c42, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/0ebce097783049d2ad8aad2e9ad367f1] into tmpdir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp, totalSize=140.2 K
2023-06-07 23:01:03,717 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting e2a4ed661d6e4cdc949a937af5484a12, keycount=89, bloomtype=ROW, size=99.1 K, encoding=NONE, compression=NONE, seqNum=191, earliestPutTs=1686178815241
2023-06-07 23:01:03,718 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting cc669bd6aca2411c904a5683ccef0c42, keycount=11, bloomtype=ROW, size=16.3 K, encoding=NONE, compression=NONE, seqNum=206, earliestPutTs=1686178851589
2023-06-07 23:01:03,718 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting 0ebce097783049d2ad8aad2e9ad367f1, keycount=19, bloomtype=ROW, size=24.8 K, encoding=NONE, compression=NONE, seqNum=228, earliestPutTs=1686178853603
2023-06-07 23:01:03,727 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] throttle.PressureAwareThroughputController(145): 64cd2307f225d9afb388082e250d5f99#info#compaction#46 average throughput is 122.11 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-07 23:01:03,743 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/a8cbb8c6352e455eb4a21c696d4b4f5d as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/a8cbb8c6352e455eb4a21c696d4b4f5d
2023-06-07 23:01:03,748 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 64cd2307f225d9afb388082e250d5f99/info of 64cd2307f225d9afb388082e250d5f99 into a8cbb8c6352e455eb4a21c696d4b4f5d(size=130.9 K), total size for store is 130.9 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-07 23:01:03,749 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 64cd2307f225d9afb388082e250d5f99: 2023-06-07 23:01:03,749 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99., storeName=64cd2307f225d9afb388082e250d5f99/info, priority=13, startTime=1686178863715; duration=0sec 2023-06-07 23:01:03,749 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-07 23:01:13,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on 64cd2307f225d9afb388082e250d5f99 2023-06-07 23:01:13,775 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64cd2307f225d9afb388082e250d5f99 1/1 column families, dataSize=11.56 KB heapSize=12.63 KB 2023-06-07 23:01:13,787 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=11.56 KB at sequenceid=243 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/c2a76e29bf7e474183a6a6f1e457d68c 2023-06-07 23:01:13,793 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/c2a76e29bf7e474183a6a6f1e457d68c as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/c2a76e29bf7e474183a6a6f1e457d68c 2023-06-07 23:01:13,797 INFO [MemStoreFlusher.0] regionserver.HStore(1080): 
Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/c2a76e29bf7e474183a6a6f1e457d68c, entries=11, sequenceid=243, filesize=16.3 K 2023-06-07 23:01:13,798 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~11.56 KB/11836, heapSize ~12.61 KB/12912, currentSize=1.05 KB/1076 for 64cd2307f225d9afb388082e250d5f99 in 23ms, sequenceid=243, compaction requested=false 2023-06-07 23:01:13,798 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64cd2307f225d9afb388082e250d5f99: 2023-06-07 23:01:15,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on 64cd2307f225d9afb388082e250d5f99 2023-06-07 23:01:15,783 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64cd2307f225d9afb388082e250d5f99 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-07 23:01:15,798 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=253 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/a512f8ac6c02422793918e4edb2cc379 2023-06-07 23:01:15,804 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/a512f8ac6c02422793918e4edb2cc379 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/a512f8ac6c02422793918e4edb2cc379 2023-06-07 23:01:15,809 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/a512f8ac6c02422793918e4edb2cc379, entries=7, sequenceid=253, filesize=12.1 K 2023-06-07 23:01:15,810 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for 64cd2307f225d9afb388082e250d5f99 in 27ms, sequenceid=253, compaction requested=true 2023-06-07 23:01:15,810 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64cd2307f225d9afb388082e250d5f99: 2023-06-07 23:01:15,810 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-07 23:01:15,810 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-07 23:01:15,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on 64cd2307f225d9afb388082e250d5f99 2023-06-07 23:01:15,811 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64cd2307f225d9afb388082e250d5f99 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-06-07 23:01:15,811 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 163136 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-07 23:01:15,812 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1912): 64cd2307f225d9afb388082e250d5f99/info is initiating minor compaction (all files) 2023-06-07 23:01:15,812 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 
64cd2307f225d9afb388082e250d5f99/info in TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99. 2023-06-07 23:01:15,812 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/a8cbb8c6352e455eb4a21c696d4b4f5d, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/c2a76e29bf7e474183a6a6f1e457d68c, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/a512f8ac6c02422793918e4edb2cc379] into tmpdir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp, totalSize=159.3 K 2023-06-07 23:01:15,812 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting a8cbb8c6352e455eb4a21c696d4b4f5d, keycount=119, bloomtype=ROW, size=130.9 K, encoding=NONE, compression=NONE, seqNum=228, earliestPutTs=1686178815241 2023-06-07 23:01:15,813 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting c2a76e29bf7e474183a6a6f1e457d68c, keycount=11, bloomtype=ROW, size=16.3 K, encoding=NONE, compression=NONE, seqNum=243, earliestPutTs=1686178863686 2023-06-07 23:01:15,813 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting a512f8ac6c02422793918e4edb2cc379, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=253, earliestPutTs=1686178873776 2023-06-07 23:01:15,827 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=279 (bloomFilter=true), 
to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/8bb808599a57428db964d5ec360b3279 2023-06-07 23:01:15,830 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] throttle.PressureAwareThroughputController(145): 64cd2307f225d9afb388082e250d5f99#info#compaction#50 average throughput is 70.29 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-07 23:01:15,832 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/8bb808599a57428db964d5ec360b3279 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/8bb808599a57428db964d5ec360b3279 2023-06-07 23:01:15,840 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/8bb808599a57428db964d5ec360b3279, entries=23, sequenceid=279, filesize=29 K 2023-06-07 23:01:15,841 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=3.15 KB/3228 for 64cd2307f225d9afb388082e250d5f99 in 30ms, sequenceid=279, compaction requested=false 2023-06-07 23:01:15,841 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64cd2307f225d9afb388082e250d5f99: 2023-06-07 23:01:15,845 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/d46833d0abf545a7a5b0509187119de3 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/d46833d0abf545a7a5b0509187119de3 2023-06-07 23:01:15,850 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 64cd2307f225d9afb388082e250d5f99/info of 64cd2307f225d9afb388082e250d5f99 into d46833d0abf545a7a5b0509187119de3(size=150.0 K), total size for store is 179.0 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-07 23:01:15,850 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 64cd2307f225d9afb388082e250d5f99: 2023-06-07 23:01:15,850 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99., storeName=64cd2307f225d9afb388082e250d5f99/info, priority=13, startTime=1686178875810; duration=0sec 2023-06-07 23:01:15,850 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-07 23:01:17,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on 64cd2307f225d9afb388082e250d5f99 2023-06-07 23:01:17,819 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64cd2307f225d9afb388082e250d5f99 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-07 23:01:17,837 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=290 (bloomFilter=true), 
to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/b70ea6f1496e4c65a3f49d2cb8cb4f98 2023-06-07 23:01:17,843 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/b70ea6f1496e4c65a3f49d2cb8cb4f98 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/b70ea6f1496e4c65a3f49d2cb8cb4f98 2023-06-07 23:01:17,849 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/b70ea6f1496e4c65a3f49d2cb8cb4f98, entries=7, sequenceid=290, filesize=12.1 K 2023-06-07 23:01:17,850 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for 64cd2307f225d9afb388082e250d5f99 in 31ms, sequenceid=290, compaction requested=true 2023-06-07 23:01:17,850 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64cd2307f225d9afb388082e250d5f99: 2023-06-07 23:01:17,850 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-07 23:01:17,850 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-07 23:01:17,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46751] regionserver.HRegion(9158): Flush requested on 
64cd2307f225d9afb388082e250d5f99 2023-06-07 23:01:17,851 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64cd2307f225d9afb388082e250d5f99 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-06-07 23:01:17,851 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 195700 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-07 23:01:17,851 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1912): 64cd2307f225d9afb388082e250d5f99/info is initiating minor compaction (all files) 2023-06-07 23:01:17,851 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 64cd2307f225d9afb388082e250d5f99/info in TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99. 2023-06-07 23:01:17,852 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/d46833d0abf545a7a5b0509187119de3, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/8bb808599a57428db964d5ec360b3279, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/b70ea6f1496e4c65a3f49d2cb8cb4f98] into tmpdir=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp, totalSize=191.1 K 2023-06-07 23:01:17,852 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting d46833d0abf545a7a5b0509187119de3, keycount=137, 
bloomtype=ROW, size=150.0 K, encoding=NONE, compression=NONE, seqNum=253, earliestPutTs=1686178815241 2023-06-07 23:01:17,852 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting 8bb808599a57428db964d5ec360b3279, keycount=23, bloomtype=ROW, size=29 K, encoding=NONE, compression=NONE, seqNum=279, earliestPutTs=1686178875784 2023-06-07 23:01:17,853 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] compactions.Compactor(207): Compacting b70ea6f1496e4c65a3f49d2cb8cb4f98, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=290, earliestPutTs=1686178875811 2023-06-07 23:01:17,872 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] throttle.PressureAwareThroughputController(145): 64cd2307f225d9afb388082e250d5f99#info#compaction#53 average throughput is 42.84 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-07 23:01:17,876 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=316 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/fa6bb46b36614f61b3f9cc5634ab4bd7 2023-06-07 23:01:17,888 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/fa6bb46b36614f61b3f9cc5634ab4bd7 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/fa6bb46b36614f61b3f9cc5634ab4bd7 2023-06-07 23:01:17,890 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/0dabf6e6e47544eb9331c8ea4ab08531 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/0dabf6e6e47544eb9331c8ea4ab08531 2023-06-07 23:01:17,893 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/fa6bb46b36614f61b3f9cc5634ab4bd7, entries=23, sequenceid=316, filesize=29.0 K 2023-06-07 23:01:17,894 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=5.25 KB/5380 for 64cd2307f225d9afb388082e250d5f99 in 43ms, sequenceid=316, compaction requested=false 2023-06-07 23:01:17,894 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64cd2307f225d9afb388082e250d5f99: 2023-06-07 23:01:17,896 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 64cd2307f225d9afb388082e250d5f99/info of 64cd2307f225d9afb388082e250d5f99 into 0dabf6e6e47544eb9331c8ea4ab08531(size=181.7 K), total size for store is 210.7 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-07 23:01:17,896 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 64cd2307f225d9afb388082e250d5f99: 2023-06-07 23:01:17,896 INFO [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99., storeName=64cd2307f225d9afb388082e250d5f99/info, priority=13, startTime=1686178877850; duration=0sec 2023-06-07 23:01:17,896 DEBUG [RS:0;jenkins-hbase4:46751-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-07 23:01:19,858 INFO [Listener at localhost/35697] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-06-07 23:01:19,874 INFO [Listener at localhost/35697] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045/jenkins-hbase4.apache.org%2C46751%2C1686178792045.1686178792429 with entries=308, filesize=306.60 KB; new WAL /user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045/jenkins-hbase4.apache.org%2C46751%2C1686178792045.1686178879859 2023-06-07 23:01:19,874 DEBUG [Listener at localhost/35697] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36205,DS-1e5b0ccb-8279-41c6-b420-4191875725f3,DISK], DatanodeInfoWithStorage[127.0.0.1:39723,DS-f58f2b4e-e295-495a-806b-e7588845dc70,DISK]] 2023-06-07 23:01:19,874 DEBUG [Listener at localhost/35697] wal.AbstractFSWAL(716): hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045/jenkins-hbase4.apache.org%2C46751%2C1686178792045.1686178792429 is not closed yet, will try archiving it next time 2023-06-07 23:01:19,880 INFO 
[Listener at localhost/35697] regionserver.HRegion(2745): Flushing 64cd2307f225d9afb388082e250d5f99 1/1 column families, dataSize=5.25 KB heapSize=5.88 KB 2023-06-07 23:01:19,888 INFO [Listener at localhost/35697] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.25 KB at sequenceid=325 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/996aa89cb8db4f70983e0c6ba64501e1 2023-06-07 23:01:19,893 DEBUG [Listener at localhost/35697] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/.tmp/info/996aa89cb8db4f70983e0c6ba64501e1 as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/996aa89cb8db4f70983e0c6ba64501e1 2023-06-07 23:01:19,897 INFO [Listener at localhost/35697] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/996aa89cb8db4f70983e0c6ba64501e1, entries=5, sequenceid=325, filesize=10.0 K 2023-06-07 23:01:19,898 INFO [Listener at localhost/35697] regionserver.HRegion(2948): Finished flush of dataSize ~5.25 KB/5380, heapSize ~5.86 KB/6000, currentSize=0 B/0 for 64cd2307f225d9afb388082e250d5f99 in 18ms, sequenceid=325, compaction requested=true 2023-06-07 23:01:19,898 DEBUG [Listener at localhost/35697] regionserver.HRegion(2446): Flush status journal for 64cd2307f225d9afb388082e250d5f99: 2023-06-07 23:01:19,898 DEBUG [Listener at localhost/35697] regionserver.HRegion(2446): Flush status journal for d7cbb10a403956efbbba3b6abbb01884: 2023-06-07 23:01:19,898 INFO [Listener at localhost/35697] 
regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-06-07 23:01:19,906 INFO [Listener at localhost/35697] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/.tmp/info/afe885f8dc104a16b8c2d84287c1f5bf 2023-06-07 23:01:19,911 DEBUG [Listener at localhost/35697] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/.tmp/info/afe885f8dc104a16b8c2d84287c1f5bf as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/info/afe885f8dc104a16b8c2d84287c1f5bf 2023-06-07 23:01:19,915 INFO [Listener at localhost/35697] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/info/afe885f8dc104a16b8c2d84287c1f5bf, entries=16, sequenceid=24, filesize=7.0 K 2023-06-07 23:01:19,916 INFO [Listener at localhost/35697] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2312, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 18ms, sequenceid=24, compaction requested=false 2023-06-07 23:01:19,916 DEBUG [Listener at localhost/35697] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-07 23:01:19,916 INFO [Listener at localhost/35697] regionserver.HRegion(2745): Flushing 5a51b58b5743e1a308ae06e8c83cc4ef 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-07 23:01:19,929 INFO [Listener at localhost/35697] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), 
to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/namespace/5a51b58b5743e1a308ae06e8c83cc4ef/.tmp/info/0426687b86e341a78b9e35dfc1afb02f 2023-06-07 23:01:19,934 DEBUG [Listener at localhost/35697] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/namespace/5a51b58b5743e1a308ae06e8c83cc4ef/.tmp/info/0426687b86e341a78b9e35dfc1afb02f as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/namespace/5a51b58b5743e1a308ae06e8c83cc4ef/info/0426687b86e341a78b9e35dfc1afb02f 2023-06-07 23:01:19,938 INFO [Listener at localhost/35697] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/namespace/5a51b58b5743e1a308ae06e8c83cc4ef/info/0426687b86e341a78b9e35dfc1afb02f, entries=2, sequenceid=6, filesize=4.8 K 2023-06-07 23:01:19,939 INFO [Listener at localhost/35697] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 5a51b58b5743e1a308ae06e8c83cc4ef in 23ms, sequenceid=6, compaction requested=false 2023-06-07 23:01:19,939 DEBUG [Listener at localhost/35697] regionserver.HRegion(2446): Flush status journal for 5a51b58b5743e1a308ae06e8c83cc4ef: 2023-06-07 23:01:19,945 INFO [Listener at localhost/35697] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045/jenkins-hbase4.apache.org%2C46751%2C1686178792045.1686178879859 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045/jenkins-hbase4.apache.org%2C46751%2C1686178792045.1686178879939 2023-06-07 23:01:19,946 DEBUG [Listener at localhost/35697] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:36205,DS-1e5b0ccb-8279-41c6-b420-4191875725f3,DISK], DatanodeInfoWithStorage[127.0.0.1:39723,DS-f58f2b4e-e295-495a-806b-e7588845dc70,DISK]] 2023-06-07 23:01:19,946 DEBUG [Listener at localhost/35697] wal.AbstractFSWAL(716): hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045/jenkins-hbase4.apache.org%2C46751%2C1686178792045.1686178879859 is not closed yet, will try archiving it next time 2023-06-07 23:01:19,946 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045/jenkins-hbase4.apache.org%2C46751%2C1686178792045.1686178792429 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/oldWALs/jenkins-hbase4.apache.org%2C46751%2C1686178792045.1686178792429 2023-06-07 23:01:19,947 INFO [Listener at localhost/35697] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-06-07 23:01:19,948 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045/jenkins-hbase4.apache.org%2C46751%2C1686178792045.1686178879859 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/oldWALs/jenkins-hbase4.apache.org%2C46751%2C1686178792045.1686178879859 2023-06-07 23:01:20,047 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-07 23:01:20,047 INFO [Listener at localhost/35697] client.ConnectionImplementation(1980): Closing master protocol: MasterService 2023-06-07 23:01:20,047 DEBUG [Listener at localhost/35697] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7c94c1cb to 127.0.0.1:54282 2023-06-07 23:01:20,047 DEBUG [Listener at localhost/35697] 
ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 23:01:20,047 DEBUG [Listener at localhost/35697] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-07 23:01:20,047 DEBUG [Listener at localhost/35697] util.JVMClusterUtil(257): Found active master hash=197474393, stopped=false 2023-06-07 23:01:20,048 INFO [Listener at localhost/35697] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44709,1686178792000 2023-06-07 23:01:20,050 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): regionserver:46751-0x100a78505330001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-07 23:01:20,050 INFO [Listener at localhost/35697] procedure2.ProcedureExecutor(629): Stopping 2023-06-07 23:01:20,050 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-07 23:01:20,050 DEBUG [Listener at localhost/35697] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x766f7a07 to 127.0.0.1:54282 2023-06-07 23:01:20,050 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 23:01:20,051 DEBUG [Listener at localhost/35697] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 23:01:20,051 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46751-0x100a78505330001, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 23:01:20,051 INFO [Listener at localhost/35697] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,46751,1686178792045' ***** 
2023-06-07 23:01:20,051 INFO [Listener at localhost/35697] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-07 23:01:20,051 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-07 23:01:20,051 INFO [RS:0;jenkins-hbase4:46751] regionserver.HeapMemoryManager(220): Stopping 2023-06-07 23:01:20,051 INFO [RS:0;jenkins-hbase4:46751] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-07 23:01:20,052 INFO [RS:0;jenkins-hbase4:46751] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-07 23:01:20,051 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-07 23:01:20,052 INFO [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(3303): Received CLOSE for 64cd2307f225d9afb388082e250d5f99 2023-06-07 23:01:20,052 INFO [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(3303): Received CLOSE for d7cbb10a403956efbbba3b6abbb01884 2023-06-07 23:01:20,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 64cd2307f225d9afb388082e250d5f99, disabling compactions & flushes 2023-06-07 23:01:20,052 INFO [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(3303): Received CLOSE for 5a51b58b5743e1a308ae06e8c83cc4ef 2023-06-07 23:01:20,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99. 
2023-06-07 23:01:20,052 INFO [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 23:01:20,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99. 2023-06-07 23:01:20,052 DEBUG [RS:0;jenkins-hbase4:46751] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5ff9ef56 to 127.0.0.1:54282 2023-06-07 23:01:20,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99. after waiting 0 ms 2023-06-07 23:01:20,052 DEBUG [RS:0;jenkins-hbase4:46751] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 23:01:20,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99. 2023-06-07 23:01:20,052 INFO [RS:0;jenkins-hbase4:46751] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-07 23:01:20,052 INFO [RS:0;jenkins-hbase4:46751] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-07 23:01:20,052 INFO [RS:0;jenkins-hbase4:46751] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-06-07 23:01:20,052 INFO [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-07 23:01:20,053 INFO [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-06-07 23:01:20,054 DEBUG [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(1478): Online Regions={64cd2307f225d9afb388082e250d5f99=TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99., d7cbb10a403956efbbba3b6abbb01884=TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884., 1588230740=hbase:meta,,1.1588230740, 5a51b58b5743e1a308ae06e8c83cc4ef=hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef.} 2023-06-07 23:01:20,055 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-07 23:01:20,056 DEBUG [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(1504): Waiting on 1588230740, 5a51b58b5743e1a308ae06e8c83cc4ef, 64cd2307f225d9afb388082e250d5f99, d7cbb10a403956efbbba3b6abbb01884 2023-06-07 23:01:20,058 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-07 23:01:20,060 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-07 23:01:20,061 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-07 23:01:20,062 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-07 23:01:20,066 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] regionserver.HStore(2712): Moving the files 
[hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/6e27e2b7c3724379883f0ab238cffebe.c8eabbbdd1a3cbfe44b966f5f23e08d4->hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/6e27e2b7c3724379883f0ab238cffebe-top, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-2682407fa2134482a490f072f1f43816, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/b21a1531943446c781fd8c3b2cd0c4e1, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-1fc5d642391f4518a770b20134032c29, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/09ff1efdde7d4531ba6bb30de3f923c1, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/4f1f136590724269b23b0518cef2589d, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/e3010e41e1e846eca16697383ef697a7, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/3b7a7fb139e14b4482e8ba52e25fadab, 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/e2a4ed661d6e4cdc949a937af5484a12, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/194d0ba17c41477089d2531fdda1791a, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/cc669bd6aca2411c904a5683ccef0c42, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/a8cbb8c6352e455eb4a21c696d4b4f5d, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/0ebce097783049d2ad8aad2e9ad367f1, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/c2a76e29bf7e474183a6a6f1e457d68c, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/d46833d0abf545a7a5b0509187119de3, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/a512f8ac6c02422793918e4edb2cc379, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/8bb808599a57428db964d5ec360b3279, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/b70ea6f1496e4c65a3f49d2cb8cb4f98] to archive 
2023-06-07 23:01:20,067 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-06-07 23:01:20,069 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/6e27e2b7c3724379883f0ab238cffebe.c8eabbbdd1a3cbfe44b966f5f23e08d4 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/6e27e2b7c3724379883f0ab238cffebe.c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 23:01:20,070 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-06-07 23:01:20,070 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-07 23:01:20,071 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-07 23:01:20,071 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-07 23:01:20,071 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-2682407fa2134482a490f072f1f43816 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-2682407fa2134482a490f072f1f43816 2023-06-07 23:01:20,071 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-07 23:01:20,072 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/b21a1531943446c781fd8c3b2cd0c4e1 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/b21a1531943446c781fd8c3b2cd0c4e1 2023-06-07 23:01:20,074 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-1fc5d642391f4518a770b20134032c29 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/TestLogRolling-testLogRolling=c8eabbbdd1a3cbfe44b966f5f23e08d4-1fc5d642391f4518a770b20134032c29 
2023-06-07 23:01:20,075 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/09ff1efdde7d4531ba6bb30de3f923c1 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/09ff1efdde7d4531ba6bb30de3f923c1 2023-06-07 23:01:20,076 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/4f1f136590724269b23b0518cef2589d to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/4f1f136590724269b23b0518cef2589d 2023-06-07 23:01:20,077 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/e3010e41e1e846eca16697383ef697a7 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/e3010e41e1e846eca16697383ef697a7 2023-06-07 23:01:20,078 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/3b7a7fb139e14b4482e8ba52e25fadab to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/3b7a7fb139e14b4482e8ba52e25fadab 2023-06-07 23:01:20,080 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/e2a4ed661d6e4cdc949a937af5484a12 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/e2a4ed661d6e4cdc949a937af5484a12 2023-06-07 23:01:20,081 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/194d0ba17c41477089d2531fdda1791a to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/194d0ba17c41477089d2531fdda1791a 2023-06-07 23:01:20,082 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/cc669bd6aca2411c904a5683ccef0c42 to 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/cc669bd6aca2411c904a5683ccef0c42 2023-06-07 23:01:20,084 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/a8cbb8c6352e455eb4a21c696d4b4f5d to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/a8cbb8c6352e455eb4a21c696d4b4f5d 2023-06-07 23:01:20,085 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/0ebce097783049d2ad8aad2e9ad367f1 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/0ebce097783049d2ad8aad2e9ad367f1 2023-06-07 23:01:20,086 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/c2a76e29bf7e474183a6a6f1e457d68c to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/c2a76e29bf7e474183a6a6f1e457d68c 
2023-06-07 23:01:20,087 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/d46833d0abf545a7a5b0509187119de3 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/d46833d0abf545a7a5b0509187119de3 2023-06-07 23:01:20,088 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/a512f8ac6c02422793918e4edb2cc379 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/a512f8ac6c02422793918e4edb2cc379 2023-06-07 23:01:20,089 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/8bb808599a57428db964d5ec360b3279 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/8bb808599a57428db964d5ec360b3279 2023-06-07 23:01:20,091 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/b70ea6f1496e4c65a3f49d2cb8cb4f98 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/info/b70ea6f1496e4c65a3f49d2cb8cb4f98 2023-06-07 23:01:20,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/64cd2307f225d9afb388082e250d5f99/recovered.edits/328.seqid, newMaxSeqId=328, maxSeqId=122 2023-06-07 23:01:20,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99. 2023-06-07 23:01:20,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 64cd2307f225d9afb388082e250d5f99: 2023-06-07 23:01:20,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1686178817327.64cd2307f225d9afb388082e250d5f99. 2023-06-07 23:01:20,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d7cbb10a403956efbbba3b6abbb01884, disabling compactions & flushes 2023-06-07 23:01:20,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884. 2023-06-07 23:01:20,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884. 
2023-06-07 23:01:20,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884. after waiting 0 ms 2023-06-07 23:01:20,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884. 2023-06-07 23:01:20,097 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884/info/6e27e2b7c3724379883f0ab238cffebe.c8eabbbdd1a3cbfe44b966f5f23e08d4->hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/c8eabbbdd1a3cbfe44b966f5f23e08d4/info/6e27e2b7c3724379883f0ab238cffebe-bottom] to archive 2023-06-07 23:01:20,098 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-07 23:01:20,099 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884/info/6e27e2b7c3724379883f0ab238cffebe.c8eabbbdd1a3cbfe44b966f5f23e08d4 to hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/archive/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884/info/6e27e2b7c3724379883f0ab238cffebe.c8eabbbdd1a3cbfe44b966f5f23e08d4 2023-06-07 23:01:20,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/default/TestLogRolling-testLogRolling/d7cbb10a403956efbbba3b6abbb01884/recovered.edits/127.seqid, newMaxSeqId=127, maxSeqId=122 2023-06-07 23:01:20,104 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884. 2023-06-07 23:01:20,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d7cbb10a403956efbbba3b6abbb01884: 2023-06-07 23:01:20,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1686178817327.d7cbb10a403956efbbba3b6abbb01884. 2023-06-07 23:01:20,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5a51b58b5743e1a308ae06e8c83cc4ef, disabling compactions & flushes 2023-06-07 23:01:20,104 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. 
2023-06-07 23:01:20,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. 2023-06-07 23:01:20,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. after waiting 0 ms 2023-06-07 23:01:20,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. 2023-06-07 23:01:20,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/data/hbase/namespace/5a51b58b5743e1a308ae06e8c83cc4ef/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-07 23:01:20,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. 2023-06-07 23:01:20,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5a51b58b5743e1a308ae06e8c83cc4ef: 2023-06-07 23:01:20,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686178792576.5a51b58b5743e1a308ae06e8c83cc4ef. 2023-06-07 23:01:20,259 INFO [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46751,1686178792045; all regions closed. 
2023-06-07 23:01:20,260 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 23:01:20,266 DEBUG [RS:0;jenkins-hbase4:46751] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/oldWALs 2023-06-07 23:01:20,266 INFO [RS:0;jenkins-hbase4:46751] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C46751%2C1686178792045.meta:.meta(num 1686178792528) 2023-06-07 23:01:20,266 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/WALs/jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 23:01:20,271 DEBUG [RS:0;jenkins-hbase4:46751] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/oldWALs 2023-06-07 23:01:20,271 INFO [RS:0;jenkins-hbase4:46751] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C46751%2C1686178792045:(num 1686178879939) 2023-06-07 23:01:20,271 DEBUG [RS:0;jenkins-hbase4:46751] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 23:01:20,271 INFO [RS:0;jenkins-hbase4:46751] regionserver.LeaseManager(133): Closed leases 2023-06-07 23:01:20,271 INFO [RS:0;jenkins-hbase4:46751] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-07 23:01:20,271 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-07 23:01:20,272 INFO [RS:0;jenkins-hbase4:46751] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46751 2023-06-07 23:01:20,274 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): regionserver:46751-0x100a78505330001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46751,1686178792045 2023-06-07 23:01:20,274 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-07 23:01:20,275 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): regionserver:46751-0x100a78505330001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-07 23:01:20,276 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46751,1686178792045] 2023-06-07 23:01:20,276 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46751,1686178792045; numProcessing=1 2023-06-07 23:01:20,278 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46751,1686178792045 already deleted, retry=false 2023-06-07 23:01:20,278 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46751,1686178792045 expired; onlineServers=0 2023-06-07 23:01:20,278 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44709,1686178792000' ***** 2023-06-07 23:01:20,278 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-07 23:01:20,279 DEBUG 
[M:0;jenkins-hbase4:44709] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ffabdf9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-07 23:01:20,279 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44709,1686178792000 2023-06-07 23:01:20,279 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44709,1686178792000; all regions closed. 2023-06-07 23:01:20,279 DEBUG [M:0;jenkins-hbase4:44709] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-07 23:01:20,279 DEBUG [M:0;jenkins-hbase4:44709] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-07 23:01:20,279 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-07 23:01:20,279 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178792173] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178792173,5,FailOnTimeoutGroup] 2023-06-07 23:01:20,279 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178792173] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178792173,5,FailOnTimeoutGroup] 2023-06-07 23:01:20,279 DEBUG [M:0;jenkins-hbase4:44709] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-07 23:01:20,280 INFO [M:0;jenkins-hbase4:44709] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-07 23:01:20,280 INFO [M:0;jenkins-hbase4:44709] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-06-07 23:01:20,280 INFO [M:0;jenkins-hbase4:44709] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-06-07 23:01:20,280 DEBUG [M:0;jenkins-hbase4:44709] master.HMaster(1512): Stopping service threads 2023-06-07 23:01:20,280 INFO [M:0;jenkins-hbase4:44709] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-07 23:01:20,281 ERROR [M:0;jenkins-hbase4:44709] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-07 23:01:20,281 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-07 23:01:20,281 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 23:01:20,281 INFO [M:0;jenkins-hbase4:44709] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-07 23:01:20,281 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-07 23:01:20,281 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-07 23:01:20,281 DEBUG [M:0;jenkins-hbase4:44709] zookeeper.ZKUtil(398): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-07 23:01:20,281 WARN [M:0;jenkins-hbase4:44709] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-07 23:01:20,282 INFO [M:0;jenkins-hbase4:44709] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-07 23:01:20,282 INFO [M:0;jenkins-hbase4:44709] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-07 23:01:20,282 DEBUG [M:0;jenkins-hbase4:44709] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-07 23:01:20,282 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 23:01:20,282 DEBUG [M:0;jenkins-hbase4:44709] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 23:01:20,282 DEBUG [M:0;jenkins-hbase4:44709] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-07 23:01:20,282 DEBUG [M:0;jenkins-hbase4:44709] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-07 23:01:20,282 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.70 KB heapSize=78.42 KB 2023-06-07 23:01:20,291 INFO [M:0;jenkins-hbase4:44709] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.70 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/99167d6f4a914b90b51e31daf48cb7fe 2023-06-07 23:01:20,296 INFO [M:0;jenkins-hbase4:44709] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 99167d6f4a914b90b51e31daf48cb7fe 2023-06-07 23:01:20,297 DEBUG [M:0;jenkins-hbase4:44709] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/99167d6f4a914b90b51e31daf48cb7fe as hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/99167d6f4a914b90b51e31daf48cb7fe 2023-06-07 23:01:20,302 INFO [M:0;jenkins-hbase4:44709] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 99167d6f4a914b90b51e31daf48cb7fe 2023-06-07 23:01:20,302 INFO [M:0;jenkins-hbase4:44709] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/99167d6f4a914b90b51e31daf48cb7fe, entries=18, sequenceid=160, filesize=6.9 K 2023-06-07 23:01:20,303 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegion(2948): Finished flush of dataSize ~64.70 KB/66256, heapSize ~78.41 KB/80288, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=160, compaction requested=false 2023-06-07 23:01:20,305 INFO 
[M:0;jenkins-hbase4:44709] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-07 23:01:20,305 DEBUG [M:0;jenkins-hbase4:44709] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-07 23:01:20,305 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/b8e6e6ed-2fa9-2f62-9ded-4b1ea631c8db/MasterData/WALs/jenkins-hbase4.apache.org,44709,1686178792000 2023-06-07 23:01:20,308 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-07 23:01:20,309 INFO [M:0;jenkins-hbase4:44709] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-07 23:01:20,309 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-07 23:01:20,309 INFO [M:0;jenkins-hbase4:44709] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44709 2023-06-07 23:01:20,311 DEBUG [M:0;jenkins-hbase4:44709] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44709,1686178792000 already deleted, retry=false 2023-06-07 23:01:20,376 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): regionserver:46751-0x100a78505330001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-07 23:01:20,376 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): regionserver:46751-0x100a78505330001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-07 23:01:20,376 INFO [RS:0;jenkins-hbase4:46751] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46751,1686178792045; zookeeper connection closed. 
2023-06-07 23:01:20,377 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@49cef618] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@49cef618 2023-06-07 23:01:20,377 INFO [Listener at localhost/35697] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-07 23:01:20,476 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-07 23:01:20,476 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44709,1686178792000; zookeeper connection closed. 2023-06-07 23:01:20,476 DEBUG [Listener at localhost/35697-EventThread] zookeeper.ZKWatcher(600): master:44709-0x100a78505330000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-07 23:01:20,477 WARN [Listener at localhost/35697] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-07 23:01:20,481 INFO [Listener at localhost/35697] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-07 23:01:20,586 WARN [BP-1307749021-172.31.14.131-1686178791469 heartbeating to localhost/127.0.0.1:33443] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-07 23:01:20,586 WARN [BP-1307749021-172.31.14.131-1686178791469 heartbeating to localhost/127.0.0.1:33443] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1307749021-172.31.14.131-1686178791469 (Datanode Uuid 75de98e3-2846-4a37-a740-387a91cd276c) service to localhost/127.0.0.1:33443 2023-06-07 23:01:20,587 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/cluster_092f3fa3-ef0b-e86e-1078-e4eb50e3c5dc/dfs/data/data3/current/BP-1307749021-172.31.14.131-1686178791469] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 23:01:20,587 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/cluster_092f3fa3-ef0b-e86e-1078-e4eb50e3c5dc/dfs/data/data4/current/BP-1307749021-172.31.14.131-1686178791469] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 23:01:20,588 WARN [Listener at localhost/35697] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-07 23:01:20,592 INFO [Listener at localhost/35697] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-07 23:01:20,696 WARN [BP-1307749021-172.31.14.131-1686178791469 heartbeating to localhost/127.0.0.1:33443] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-07 23:01:20,696 WARN [BP-1307749021-172.31.14.131-1686178791469 heartbeating to localhost/127.0.0.1:33443] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1307749021-172.31.14.131-1686178791469 (Datanode Uuid abc03f61-6d32-40f5-852b-33b275a6bbbb) service to localhost/127.0.0.1:33443 2023-06-07 23:01:20,697 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/cluster_092f3fa3-ef0b-e86e-1078-e4eb50e3c5dc/dfs/data/data1/current/BP-1307749021-172.31.14.131-1686178791469] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 
23:01:20,697 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/cluster_092f3fa3-ef0b-e86e-1078-e4eb50e3c5dc/dfs/data/data2/current/BP-1307749021-172.31.14.131-1686178791469] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-07 23:01:20,711 INFO [Listener at localhost/35697] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-07 23:01:20,826 INFO [Listener at localhost/35697] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-07 23:01:20,854 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-07 23:01:20,864 INFO [Listener at localhost/35697] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=107 (was 95) - Thread LEAK? -, OpenFileDescriptor=539 (was 497) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=4 (was 7), ProcessCount=170 (was 169) - ProcessCount LEAK? 
-, AvailableMemoryMB=357 (was 552) 2023-06-07 23:01:20,872 INFO [Listener at localhost/35697] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=107, OpenFileDescriptor=539, MaxFileDescriptor=60000, SystemLoadAverage=4, ProcessCount=170, AvailableMemoryMB=356 2023-06-07 23:01:20,872 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-07 23:01:20,872 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/hadoop.log.dir so I do NOT create it in target/test-data/ded6075f-840e-2015-8378-613148ec120f 2023-06-07 23:01:20,873 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ae0d4a40-c5b1-c3dd-8930-026cd9bcb417/hadoop.tmp.dir so I do NOT create it in target/test-data/ded6075f-840e-2015-8378-613148ec120f 2023-06-07 23:01:20,873 INFO [Listener at localhost/35697] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/cluster_1d25ae8e-40fc-c9b6-6a18-c1fe34ab1b96, deleteOnExit=true 2023-06-07 23:01:20,873 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-07 23:01:20,873 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting test.cache.data to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/test.cache.data in system properties and HBase conf 2023-06-07 23:01:20,873 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/hadoop.tmp.dir in system properties and HBase conf 2023-06-07 23:01:20,873 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/hadoop.log.dir in system properties and HBase conf 2023-06-07 23:01:20,873 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-07 23:01:20,873 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-07 23:01:20,873 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-07 23:01:20,873 DEBUG [Listener at localhost/35697] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-06-07 23:01:20,874 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-07 23:01:20,874 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-07 23:01:20,874 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-07 23:01:20,874 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-07 23:01:20,874 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-07 23:01:20,874 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-07 23:01:20,874 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-07 23:01:20,874 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-07 23:01:20,874 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-07 23:01:20,875 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/nfs.dump.dir in system properties and HBase conf 2023-06-07 23:01:20,875 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/java.io.tmpdir in system properties and HBase conf 2023-06-07 23:01:20,875 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-07 23:01:20,875 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-07 23:01:20,875 INFO [Listener at localhost/35697] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-07 23:01:20,877 WARN [Listener at localhost/35697] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-07 23:01:20,879 WARN [Listener at localhost/35697] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-07 23:01:20,880 WARN [Listener at localhost/35697] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-07 23:01:20,922 WARN [Listener at localhost/35697] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-07 23:01:20,924 INFO [Listener at localhost/35697] log.Slf4jLog(67): jetty-6.1.26 2023-06-07 23:01:20,929 INFO [Listener at localhost/35697] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/java.io.tmpdir/Jetty_localhost_40845_hdfs____.hwhp5/webapp 2023-06-07 23:01:21,019 INFO [Listener at localhost/35697] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40845 2023-06-07 23:01:21,020 WARN [Listener at localhost/35697] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-07 23:01:21,023 WARN [Listener at localhost/35697] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-07 23:01:21,023 WARN [Listener at localhost/35697] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-07 23:01:21,059 WARN [Listener at localhost/35959] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-07 23:01:21,073 WARN [Listener at localhost/35959] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-07 23:01:21,075 WARN [Listener at localhost/35959] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-07 23:01:21,076 INFO [Listener at localhost/35959] log.Slf4jLog(67): jetty-6.1.26 2023-06-07 23:01:21,080 INFO [Listener at localhost/35959] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/java.io.tmpdir/Jetty_localhost_34467_datanode____.rf2qsl/webapp 2023-06-07 23:01:21,169 INFO [Listener at localhost/35959] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34467 2023-06-07 23:01:21,177 WARN [Listener at localhost/41585] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-07 23:01:21,190 WARN [Listener at localhost/41585] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-07 23:01:21,192 WARN [Listener at localhost/41585] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-07 23:01:21,193 INFO [Listener at localhost/41585] log.Slf4jLog(67): jetty-6.1.26 
2023-06-07 23:01:21,197 INFO [Listener at localhost/41585] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/java.io.tmpdir/Jetty_localhost_34211_datanode____.6d3mq0/webapp 2023-06-07 23:01:21,267 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5d5e92c9b8c8f652: Processing first storage report for DS-017ed4f4-4204-4410-9832-0461bcd1faaf from datanode 237725f3-166f-43d6-9494-4357fb124e2b 2023-06-07 23:01:21,267 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5d5e92c9b8c8f652: from storage DS-017ed4f4-4204-4410-9832-0461bcd1faaf node DatanodeRegistration(127.0.0.1:34069, datanodeUuid=237725f3-166f-43d6-9494-4357fb124e2b, infoPort=35927, infoSecurePort=0, ipcPort=41585, storageInfo=lv=-57;cid=testClusterID;nsid=1350018909;c=1686178880882), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 23:01:21,267 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5d5e92c9b8c8f652: Processing first storage report for DS-a61af354-37fb-43c5-aef4-1e0d72d8f4da from datanode 237725f3-166f-43d6-9494-4357fb124e2b 2023-06-07 23:01:21,267 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5d5e92c9b8c8f652: from storage DS-a61af354-37fb-43c5-aef4-1e0d72d8f4da node DatanodeRegistration(127.0.0.1:34069, datanodeUuid=237725f3-166f-43d6-9494-4357fb124e2b, infoPort=35927, infoSecurePort=0, ipcPort=41585, storageInfo=lv=-57;cid=testClusterID;nsid=1350018909;c=1686178880882), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 23:01:21,295 INFO [Listener at localhost/41585] 
log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34211 2023-06-07 23:01:21,302 WARN [Listener at localhost/36347] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-07 23:01:21,413 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x999c56a63f406c18: Processing first storage report for DS-f08eb48c-9bc7-4730-9358-e89f30d64f9d from datanode d9f90624-aea9-42ae-9648-00adb1bc0ec5 2023-06-07 23:01:21,414 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x999c56a63f406c18: from storage DS-f08eb48c-9bc7-4730-9358-e89f30d64f9d node DatanodeRegistration(127.0.0.1:42069, datanodeUuid=d9f90624-aea9-42ae-9648-00adb1bc0ec5, infoPort=39845, infoSecurePort=0, ipcPort=36347, storageInfo=lv=-57;cid=testClusterID;nsid=1350018909;c=1686178880882), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 23:01:21,414 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x999c56a63f406c18: Processing first storage report for DS-b24448ee-9d0e-4b77-a4f9-11fd261899ae from datanode d9f90624-aea9-42ae-9648-00adb1bc0ec5 2023-06-07 23:01:21,414 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x999c56a63f406c18: from storage DS-b24448ee-9d0e-4b77-a4f9-11fd261899ae node DatanodeRegistration(127.0.0.1:42069, datanodeUuid=d9f90624-aea9-42ae-9648-00adb1bc0ec5, infoPort=39845, infoSecurePort=0, ipcPort=36347, storageInfo=lv=-57;cid=testClusterID;nsid=1350018909;c=1686178880882), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-07 23:01:21,510 DEBUG [Listener at localhost/36347] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f 2023-06-07 23:01:21,513 INFO 
[Listener at localhost/36347] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/cluster_1d25ae8e-40fc-c9b6-6a18-c1fe34ab1b96/zookeeper_0, clientPort=51773, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/cluster_1d25ae8e-40fc-c9b6-6a18-c1fe34ab1b96/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/cluster_1d25ae8e-40fc-c9b6-6a18-c1fe34ab1b96/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-07 23:01:21,514 INFO [Listener at localhost/36347] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51773 2023-06-07 23:01:21,514 INFO [Listener at localhost/36347] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 23:01:21,515 INFO [Listener at localhost/36347] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 23:01:21,528 INFO [Listener at localhost/36347] util.FSUtils(471): Created version file at hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7 with version=8 2023-06-07 23:01:21,528 INFO [Listener at localhost/36347] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43147/user/jenkins/test-data/7c1b1e73-eba5-11cd-8dcd-d9dbd9ee07f9/hbase-staging 2023-06-07 23:01:21,530 INFO [Listener at localhost/36347] 
client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-06-07 23:01:21,530 INFO [Listener at localhost/36347] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 23:01:21,530 INFO [Listener at localhost/36347] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-07 23:01:21,530 INFO [Listener at localhost/36347] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-07 23:01:21,530 INFO [Listener at localhost/36347] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 23:01:21,530 INFO [Listener at localhost/36347] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-07 23:01:21,530 INFO [Listener at localhost/36347] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-06-07 23:01:21,531 INFO [Listener at localhost/36347] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36837
2023-06-07 23:01:21,532 INFO [Listener at localhost/36347] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 23:01:21,532 INFO [Listener at localhost/36347] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 23:01:21,533 INFO [Listener at localhost/36347] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36837 connecting to ZooKeeper ensemble=127.0.0.1:51773
2023-06-07 23:01:21,539 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:368370x0, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-07 23:01:21,540 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36837-0x100a78662eb0000 connected
2023-06-07 23:01:21,553 DEBUG [Listener at localhost/36347] zookeeper.ZKUtil(164): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-07 23:01:21,554 DEBUG [Listener at localhost/36347] zookeeper.ZKUtil(164): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-07 23:01:21,554 DEBUG [Listener at localhost/36347] zookeeper.ZKUtil(164): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-07 23:01:21,554 DEBUG [Listener at localhost/36347] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36837
2023-06-07 23:01:21,555 DEBUG [Listener at localhost/36347] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36837
2023-06-07 23:01:21,555 DEBUG [Listener at localhost/36347] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36837
2023-06-07 23:01:21,555 DEBUG [Listener at localhost/36347] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36837
2023-06-07 23:01:21,555 DEBUG [Listener at localhost/36347] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36837
2023-06-07 23:01:21,555 INFO [Listener at localhost/36347] master.HMaster(444): hbase.rootdir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7, hbase.cluster.distributed=false
2023-06-07 23:01:21,568 INFO [Listener at localhost/36347] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-06-07 23:01:21,568 INFO [Listener at localhost/36347] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 23:01:21,568 INFO [Listener at localhost/36347] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-07 23:01:21,568 INFO [Listener at localhost/36347] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-07 23:01:21,568 INFO [Listener at localhost/36347] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-07 23:01:21,568 INFO [Listener at localhost/36347] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-07 23:01:21,568 INFO [Listener at localhost/36347] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-06-07 23:01:21,570 INFO [Listener at localhost/36347] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40553
2023-06-07 23:01:21,570 INFO [Listener at localhost/36347] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-06-07 23:01:21,572 DEBUG [Listener at localhost/36347] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-06-07 23:01:21,572 INFO [Listener at localhost/36347] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 23:01:21,573 INFO [Listener at localhost/36347] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 23:01:21,574 INFO [Listener at localhost/36347] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40553 connecting to ZooKeeper ensemble=127.0.0.1:51773
2023-06-07 23:01:21,576 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): regionserver:405530x0, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-07 23:01:21,578 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40553-0x100a78662eb0001 connected
2023-06-07 23:01:21,578 DEBUG [Listener at localhost/36347] zookeeper.ZKUtil(164): regionserver:40553-0x100a78662eb0001, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-07 23:01:21,578 DEBUG [Listener at localhost/36347] zookeeper.ZKUtil(164): regionserver:40553-0x100a78662eb0001, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-07 23:01:21,579 DEBUG [Listener at localhost/36347] zookeeper.ZKUtil(164): regionserver:40553-0x100a78662eb0001, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-07 23:01:21,580 DEBUG [Listener at localhost/36347] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40553
2023-06-07 23:01:21,580 DEBUG [Listener at localhost/36347] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40553
2023-06-07 23:01:21,582 DEBUG [Listener at localhost/36347] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40553
2023-06-07 23:01:21,583 DEBUG [Listener at localhost/36347] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40553
2023-06-07 23:01:21,584 DEBUG [Listener at localhost/36347] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40553
2023-06-07 23:01:21,586 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36837,1686178881529
2023-06-07 23:01:21,589 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-07 23:01:21,589 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36837,1686178881529
2023-06-07 23:01:21,590 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-07 23:01:21,590 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): regionserver:40553-0x100a78662eb0001, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-07 23:01:21,590 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 23:01:21,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-07 23:01:21,592 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36837,1686178881529 from backup master directory
2023-06-07 23:01:21,592 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-07 23:01:21,593 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36837,1686178881529
2023-06-07 23:01:21,593 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-07 23:01:21,593 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-07 23:01:21,593 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36837,1686178881529
2023-06-07 23:01:21,604 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/hbase.id with ID: a79a33aa-a6b3-4469-815c-5f42a5299286
2023-06-07 23:01:21,613 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-07 23:01:21,615 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 23:01:21,621 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6e777c6a to 127.0.0.1:51773 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-07 23:01:21,625 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a89b372, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-07 23:01:21,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-07 23:01:21,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-06-07 23:01:21,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-07 23:01:21,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/data/master/store-tmp
2023-06-07 23:01:21,634 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 23:01:21,634 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-07 23:01:21,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 23:01:21,634 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 23:01:21,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-07 23:01:21,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 23:01:21,635 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 23:01:21,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-07 23:01:21,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/WALs/jenkins-hbase4.apache.org,36837,1686178881529
2023-06-07 23:01:21,637 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36837%2C1686178881529, suffix=, logDir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/WALs/jenkins-hbase4.apache.org,36837,1686178881529, archiveDir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/oldWALs, maxLogs=10
2023-06-07 23:01:21,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/WALs/jenkins-hbase4.apache.org,36837,1686178881529/jenkins-hbase4.apache.org%2C36837%2C1686178881529.1686178881637
2023-06-07 23:01:21,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42069,DS-f08eb48c-9bc7-4730-9358-e89f30d64f9d,DISK], DatanodeInfoWithStorage[127.0.0.1:34069,DS-017ed4f4-4204-4410-9832-0461bcd1faaf,DISK]]
2023-06-07 23:01:21,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-06-07 23:01:21,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 23:01:21,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-06-07 23:01:21,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-06-07 23:01:21,644 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-06-07 23:01:21,645 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-06-07 23:01:21,646 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-06-07 23:01:21,646 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 23:01:21,647 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-06-07 23:01:21,647 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-06-07 23:01:21,649 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682
2023-06-07 23:01:21,651 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-07 23:01:21,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=692685, jitterRate=-0.11920574307441711}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-07 23:01:21,651 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-07 23:01:21,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4
2023-06-07 23:01:21,652 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5
2023-06-07 23:01:21,652 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50
2023-06-07 23:01:21,652 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery...
2023-06-07 23:01:21,653 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec
2023-06-07 23:01:21,653 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec
2023-06-07 23:01:21,653 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2023-06-07 23:01:21,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: []
2023-06-07 23:01:21,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'.
2023-06-07 23:01:21,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false
2023-06-07 23:01:21,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc.
2023-06-07 23:01:21,665 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer
2023-06-07 23:01:21,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited
2023-06-07 23:01:21,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer
2023-06-07 23:01:21,668 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 23:01:21,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split
2023-06-07 23:01:21,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge
2023-06-07 23:01:21,669 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup
2023-06-07 23:01:21,670 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-06-07 23:01:21,670 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): regionserver:40553-0x100a78662eb0001, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-06-07 23:01:21,670 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 23:01:21,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36837,1686178881529, sessionid=0x100a78662eb0000, setting cluster-up flag (Was=false)
2023-06-07 23:01:21,675 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 23:01:21,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort
2023-06-07 23:01:21,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36837,1686178881529
2023-06-07 23:01:21,683 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 23:01:21,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort
2023-06-07 23:01:21,689 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36837,1686178881529
2023-06-07 23:01:21,689 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/.hbase-snapshot/.tmp
2023-06-07 23:01:21,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta
2023-06-07 23:01:21,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-06-07 23:01:21,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-06-07 23:01:21,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-06-07 23:01:21,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-06-07 23:01:21,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10
2023-06-07 23:01:21,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 23:01:21,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-06-07 23:01:21,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-07 23:01:21,696 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686178911695
2023-06-07 23:01:21,696 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1
2023-06-07 23:01:21,696 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2023-06-07 23:01:21,696 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2023-06-07 23:01:21,696 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner
2023-06-07 23:01:21,696 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
2023-06-07 23:01:21,696 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads
2023-06-07 23:01:21,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-06-07 23:01:21,697 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta
2023-06-07 23:01:21,697 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region
2023-06-07 23:01:21,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2
2023-06-07 23:01:21,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner
2023-06-07 23:01:21,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2023-06-07 23:01:21,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2023-06-07 23:01:21,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2023-06-07 23:01:21,698 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178881698,5,FailOnTimeoutGroup]
2023-06-07 23:01:21,698 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178881698,5,FailOnTimeoutGroup]
2023-06-07 23:01:21,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-06-07 23:01:21,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it.
2023-06-07 23:01:21,699 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-06-07 23:01:21,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled.
2023-06-07 23:01:21,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled.
2023-06-07 23:01:21,706 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-06-07 23:01:21,707 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-06-07 23:01:21,707 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7
2023-06-07 23:01:21,713 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-07 23:01:21,714 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-06-07 23:01:21,715 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/info
2023-06-07 23:01:21,716 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-06-07 23:01:21,716 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 23:01:21,716 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-06-07 23:01:21,717 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/rep_barrier
2023-06-07 23:01:21,718 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-06-07 23:01:21,718 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-07 23:01:21,718 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-06-07 23:01:21,719 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/table
2023-06-07 23:01:21,719 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB);
files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-07 23:01:21,720 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 23:01:21,720 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740 2023-06-07 23:01:21,721 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740 2023-06-07 23:01:21,722 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-07 23:01:21,723 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-07 23:01:21,725 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 23:01:21,725 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=798989, jitterRate=0.015967190265655518}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-07 23:01:21,725 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-07 23:01:21,725 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-07 23:01:21,725 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-07 23:01:21,725 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-07 23:01:21,725 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-07 23:01:21,726 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-07 23:01:21,726 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-07 23:01:21,726 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-07 23:01:21,726 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-07 23:01:21,727 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-07 23:01:21,727 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-07 23:01:21,728 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-07 23:01:21,729 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-07 23:01:21,786 INFO [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(951): ClusterId : a79a33aa-a6b3-4469-815c-5f42a5299286 2023-06-07 23:01:21,787 DEBUG [RS:0;jenkins-hbase4:40553] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-07 23:01:21,789 DEBUG [RS:0;jenkins-hbase4:40553] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-07 23:01:21,789 DEBUG [RS:0;jenkins-hbase4:40553] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-07 23:01:21,791 DEBUG [RS:0;jenkins-hbase4:40553] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-07 23:01:21,792 DEBUG [RS:0;jenkins-hbase4:40553] zookeeper.ReadOnlyZKClient(139): Connect 0x463f859f to 127.0.0.1:51773 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-07 23:01:21,795 DEBUG [RS:0;jenkins-hbase4:40553] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@48f87328, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 
2023-06-07 23:01:21,795 DEBUG [RS:0;jenkins-hbase4:40553] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4cc7eeef, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-07 23:01:21,804 DEBUG [RS:0;jenkins-hbase4:40553] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:40553 2023-06-07 23:01:21,804 INFO [RS:0;jenkins-hbase4:40553] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-07 23:01:21,804 INFO [RS:0;jenkins-hbase4:40553] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-07 23:01:21,804 DEBUG [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(1022): About to register with Master. 2023-06-07 23:01:21,804 INFO [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,36837,1686178881529 with isa=jenkins-hbase4.apache.org/172.31.14.131:40553, startcode=1686178881567 2023-06-07 23:01:21,804 DEBUG [RS:0;jenkins-hbase4:40553] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-07 23:01:21,807 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58251, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-06-07 23:01:21,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36837] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40553,1686178881567 2023-06-07 23:01:21,808 DEBUG [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7 2023-06-07 23:01:21,808 DEBUG 
[RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35959 2023-06-07 23:01:21,809 DEBUG [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-07 23:01:21,811 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-07 23:01:21,811 DEBUG [RS:0;jenkins-hbase4:40553] zookeeper.ZKUtil(162): regionserver:40553-0x100a78662eb0001, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40553,1686178881567 2023-06-07 23:01:21,811 WARN [RS:0;jenkins-hbase4:40553] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-07 23:01:21,811 INFO [RS:0;jenkins-hbase4:40553] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-07 23:01:21,812 DEBUG [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(1946): logDir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/jenkins-hbase4.apache.org,40553,1686178881567 2023-06-07 23:01:21,814 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40553,1686178881567] 2023-06-07 23:01:21,816 DEBUG [RS:0;jenkins-hbase4:40553] zookeeper.ZKUtil(162): regionserver:40553-0x100a78662eb0001, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40553,1686178881567 2023-06-07 23:01:21,817 DEBUG [RS:0;jenkins-hbase4:40553] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-07 23:01:21,817 INFO [RS:0;jenkins-hbase4:40553] 
regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-07 23:01:21,818 INFO [RS:0;jenkins-hbase4:40553] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-07 23:01:21,819 INFO [RS:0;jenkins-hbase4:40553] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-07 23:01:21,819 INFO [RS:0;jenkins-hbase4:40553] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-07 23:01:21,819 INFO [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-07 23:01:21,820 INFO [RS:0;jenkins-hbase4:40553] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-07 23:01:21,821 DEBUG [RS:0;jenkins-hbase4:40553] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 23:01:21,821 DEBUG [RS:0;jenkins-hbase4:40553] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 23:01:21,821 DEBUG [RS:0;jenkins-hbase4:40553] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 23:01:21,821 DEBUG [RS:0;jenkins-hbase4:40553] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 23:01:21,821 DEBUG [RS:0;jenkins-hbase4:40553] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 23:01:21,821 DEBUG [RS:0;jenkins-hbase4:40553] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-07 23:01:21,821 DEBUG [RS:0;jenkins-hbase4:40553] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 23:01:21,821 DEBUG [RS:0;jenkins-hbase4:40553] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 23:01:21,821 DEBUG [RS:0;jenkins-hbase4:40553] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-07 23:01:21,821 DEBUG [RS:0;jenkins-hbase4:40553] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 
2023-06-07 23:01:21,822 INFO [RS:0;jenkins-hbase4:40553] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-07 23:01:21,822 INFO [RS:0;jenkins-hbase4:40553] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-07 23:01:21,822 INFO [RS:0;jenkins-hbase4:40553] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-07 23:01:21,832 INFO [RS:0;jenkins-hbase4:40553] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-07 23:01:21,832 INFO [RS:0;jenkins-hbase4:40553] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40553,1686178881567-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-07 23:01:21,844 INFO [RS:0;jenkins-hbase4:40553] regionserver.Replication(203): jenkins-hbase4.apache.org,40553,1686178881567 started 2023-06-07 23:01:21,844 INFO [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40553,1686178881567, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40553, sessionid=0x100a78662eb0001 2023-06-07 23:01:21,844 DEBUG [RS:0;jenkins-hbase4:40553] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-07 23:01:21,844 DEBUG [RS:0;jenkins-hbase4:40553] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40553,1686178881567 2023-06-07 23:01:21,844 DEBUG [RS:0;jenkins-hbase4:40553] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40553,1686178881567' 2023-06-07 23:01:21,844 DEBUG [RS:0;jenkins-hbase4:40553] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-07 23:01:21,844 DEBUG [RS:0;jenkins-hbase4:40553] 
procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-07 23:01:21,845 DEBUG [RS:0;jenkins-hbase4:40553] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-07 23:01:21,845 DEBUG [RS:0;jenkins-hbase4:40553] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-07 23:01:21,845 DEBUG [RS:0;jenkins-hbase4:40553] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40553,1686178881567 2023-06-07 23:01:21,845 DEBUG [RS:0;jenkins-hbase4:40553] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40553,1686178881567' 2023-06-07 23:01:21,845 DEBUG [RS:0;jenkins-hbase4:40553] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-07 23:01:21,845 DEBUG [RS:0;jenkins-hbase4:40553] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-07 23:01:21,845 DEBUG [RS:0;jenkins-hbase4:40553] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-07 23:01:21,845 INFO [RS:0;jenkins-hbase4:40553] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-07 23:01:21,845 INFO [RS:0;jenkins-hbase4:40553] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-07 23:01:21,879 DEBUG [jenkins-hbase4:36837] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-07 23:01:21,880 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40553,1686178881567, state=OPENING 2023-06-07 23:01:21,881 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-07 23:01:21,882 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 23:01:21,883 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40553,1686178881567}] 2023-06-07 23:01:21,883 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-07 23:01:21,947 INFO [RS:0;jenkins-hbase4:40553] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40553%2C1686178881567, suffix=, logDir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/jenkins-hbase4.apache.org,40553,1686178881567, archiveDir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/oldWALs, maxLogs=32 2023-06-07 23:01:21,954 INFO [RS:0;jenkins-hbase4:40553] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/jenkins-hbase4.apache.org,40553,1686178881567/jenkins-hbase4.apache.org%2C40553%2C1686178881567.1686178881947 2023-06-07 23:01:21,954 DEBUG [RS:0;jenkins-hbase4:40553] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:42069,DS-f08eb48c-9bc7-4730-9358-e89f30d64f9d,DISK], DatanodeInfoWithStorage[127.0.0.1:34069,DS-017ed4f4-4204-4410-9832-0461bcd1faaf,DISK]] 2023-06-07 23:01:22,036 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40553,1686178881567 2023-06-07 23:01:22,036 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-07 23:01:22,038 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37162, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-07 23:01:22,041 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-07 23:01:22,041 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-07 23:01:22,043 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40553%2C1686178881567.meta, suffix=.meta, logDir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/jenkins-hbase4.apache.org,40553,1686178881567, archiveDir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/oldWALs, maxLogs=32 2023-06-07 23:01:22,049 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/jenkins-hbase4.apache.org,40553,1686178881567/jenkins-hbase4.apache.org%2C40553%2C1686178881567.meta.1686178882043.meta 2023-06-07 23:01:22,049 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:42069,DS-f08eb48c-9bc7-4730-9358-e89f30d64f9d,DISK], DatanodeInfoWithStorage[127.0.0.1:34069,DS-017ed4f4-4204-4410-9832-0461bcd1faaf,DISK]] 2023-06-07 23:01:22,049 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-07 23:01:22,050 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-07 23:01:22,050 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-07 23:01:22,050 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-06-07 23:01:22,050 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-07 23:01:22,050 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 23:01:22,050 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-07 23:01:22,050 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-07 23:01:22,051 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-07 23:01:22,052 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/info 2023-06-07 23:01:22,052 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/info 2023-06-07 23:01:22,052 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-07 23:01:22,053 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 23:01:22,053 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-07 23:01:22,054 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/rep_barrier 2023-06-07 23:01:22,054 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/rep_barrier 2023-06-07 23:01:22,054 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-07 23:01:22,055 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 23:01:22,055 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-07 23:01:22,055 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/table 2023-06-07 23:01:22,056 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/table 2023-06-07 23:01:22,056 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-07 23:01:22,056 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): 
Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 23:01:22,057 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740 2023-06-07 23:01:22,058 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740 2023-06-07 23:01:22,060 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-07 23:01:22,061 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-07 23:01:22,062 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=746004, jitterRate=-0.05140778422355652}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-07 23:01:22,062 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-07 23:01:22,063 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686178882036 2023-06-07 23:01:22,066 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-07 23:01:22,067 INFO 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-07 23:01:22,067 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40553,1686178881567, state=OPEN 2023-06-07 23:01:22,069 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-07 23:01:22,069 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-07 23:01:22,071 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-07 23:01:22,071 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40553,1686178881567 in 186 msec 2023-06-07 23:01:22,072 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-07 23:01:22,072 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 344 msec 2023-06-07 23:01:22,074 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 383 msec 2023-06-07 23:01:22,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686178882074, completionTime=-1 2023-06-07 23:01:22,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-07 
23:01:22,074 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-07 23:01:22,076 DEBUG [hconnection-0x104024a0-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-07 23:01:22,078 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37168, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-07 23:01:22,079 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-07 23:01:22,079 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686178942079 2023-06-07 23:01:22,079 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686179002079 2023-06-07 23:01:22,079 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-06-07 23:01:22,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36837,1686178881529-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-07 23:01:22,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36837,1686178881529-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-06-07 23:01:22,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36837,1686178881529-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-07 23:01:22,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36837, period=300000, unit=MILLISECONDS is enabled. 2023-06-07 23:01:22,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-07 23:01:22,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-06-07 23:01:22,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-07 23:01:22,087 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-07 23:01:22,087 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-07 23:01:22,088 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-07 23:01:22,089 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-07 23:01:22,090 DEBUG [HFileArchiver-11] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/.tmp/data/hbase/namespace/1a2e2527ddbf525b6e4afba0b3504467 2023-06-07 23:01:22,091 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/.tmp/data/hbase/namespace/1a2e2527ddbf525b6e4afba0b3504467 empty. 2023-06-07 23:01:22,091 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/.tmp/data/hbase/namespace/1a2e2527ddbf525b6e4afba0b3504467 2023-06-07 23:01:22,091 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-07 23:01:22,101 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-07 23:01:22,102 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1a2e2527ddbf525b6e4afba0b3504467, NAME => 'hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/.tmp 2023-06-07 23:01:22,109 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 23:01:22,109 DEBUG 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 1a2e2527ddbf525b6e4afba0b3504467, disabling compactions & flushes 2023-06-07 23:01:22,109 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467. 2023-06-07 23:01:22,109 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467. 2023-06-07 23:01:22,109 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467. after waiting 0 ms 2023-06-07 23:01:22,109 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467. 2023-06-07 23:01:22,109 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467. 2023-06-07 23:01:22,109 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 1a2e2527ddbf525b6e4afba0b3504467: 2023-06-07 23:01:22,111 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-07 23:01:22,112 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178882112"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686178882112"}]},"ts":"1686178882112"} 2023-06-07 23:01:22,114 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-07 23:01:22,115 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-07 23:01:22,115 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178882115"}]},"ts":"1686178882115"} 2023-06-07 23:01:22,116 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-07 23:01:22,123 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1a2e2527ddbf525b6e4afba0b3504467, ASSIGN}] 2023-06-07 23:01:22,125 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1a2e2527ddbf525b6e4afba0b3504467, ASSIGN 2023-06-07 23:01:22,126 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=1a2e2527ddbf525b6e4afba0b3504467, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40553,1686178881567; forceNewPlan=false, retain=false 2023-06-07 23:01:22,276 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1a2e2527ddbf525b6e4afba0b3504467, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40553,1686178881567 2023-06-07 23:01:22,277 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178882276"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1686178882276"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686178882276"}]},"ts":"1686178882276"} 2023-06-07 23:01:22,278 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 1a2e2527ddbf525b6e4afba0b3504467, server=jenkins-hbase4.apache.org,40553,1686178881567}] 2023-06-07 23:01:22,433 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467. 2023-06-07 23:01:22,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1a2e2527ddbf525b6e4afba0b3504467, NAME => 'hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467.', STARTKEY => '', ENDKEY => ''} 2023-06-07 23:01:22,434 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 1a2e2527ddbf525b6e4afba0b3504467 2023-06-07 23:01:22,434 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-07 23:01:22,434 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1a2e2527ddbf525b6e4afba0b3504467 2023-06-07 23:01:22,434 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1a2e2527ddbf525b6e4afba0b3504467 2023-06-07 23:01:22,435 INFO 
[StoreOpener-1a2e2527ddbf525b6e4afba0b3504467-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1a2e2527ddbf525b6e4afba0b3504467 2023-06-07 23:01:22,436 DEBUG [StoreOpener-1a2e2527ddbf525b6e4afba0b3504467-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/namespace/1a2e2527ddbf525b6e4afba0b3504467/info 2023-06-07 23:01:22,436 DEBUG [StoreOpener-1a2e2527ddbf525b6e4afba0b3504467-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/namespace/1a2e2527ddbf525b6e4afba0b3504467/info 2023-06-07 23:01:22,436 INFO [StoreOpener-1a2e2527ddbf525b6e4afba0b3504467-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1a2e2527ddbf525b6e4afba0b3504467 columnFamilyName info 2023-06-07 23:01:22,437 INFO [StoreOpener-1a2e2527ddbf525b6e4afba0b3504467-1] regionserver.HStore(310): Store=1a2e2527ddbf525b6e4afba0b3504467/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-07 23:01:22,438 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/namespace/1a2e2527ddbf525b6e4afba0b3504467 2023-06-07 23:01:22,438 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/namespace/1a2e2527ddbf525b6e4afba0b3504467 2023-06-07 23:01:22,440 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1a2e2527ddbf525b6e4afba0b3504467 2023-06-07 23:01:22,442 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/namespace/1a2e2527ddbf525b6e4afba0b3504467/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-07 23:01:22,442 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1a2e2527ddbf525b6e4afba0b3504467; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=867533, jitterRate=0.10312587022781372}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-07 23:01:22,442 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1a2e2527ddbf525b6e4afba0b3504467: 2023-06-07 23:01:22,444 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467., pid=6, masterSystemTime=1686178882430 2023-06-07 23:01:22,446 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open 
deploy task for hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467. 2023-06-07 23:01:22,446 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467. 2023-06-07 23:01:22,447 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1a2e2527ddbf525b6e4afba0b3504467, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40553,1686178881567 2023-06-07 23:01:22,447 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686178882447"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1686178882447"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686178882447"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686178882447"}]},"ts":"1686178882447"} 2023-06-07 23:01:22,450 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-07 23:01:22,450 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 1a2e2527ddbf525b6e4afba0b3504467, server=jenkins-hbase4.apache.org,40553,1686178881567 in 170 msec 2023-06-07 23:01:22,452 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-07 23:01:22,452 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=1a2e2527ddbf525b6e4afba0b3504467, ASSIGN in 328 msec 2023-06-07 23:01:22,452 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-07 
23:01:22,453 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686178882453"}]},"ts":"1686178882453"} 2023-06-07 23:01:22,454 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-07 23:01:22,456 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-07 23:01:22,457 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 370 msec 2023-06-07 23:01:22,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-07 23:01:22,489 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-07 23:01:22,489 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 23:01:22,493 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-07 23:01:22,499 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-07 23:01:22,503 
INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-06-07 23:01:22,514 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-07 23:01:22,520 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-07 23:01:22,524 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-06-07 23:01:22,538 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-07 23:01:22,540 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-07 23:01:22,540 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.947sec 2023-06-07 23:01:22,540 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-07 23:01:22,540 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-06-07 23:01:22,541 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-07 23:01:22,541 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36837,1686178881529-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-07 23:01:22,541 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36837,1686178881529-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-06-07 23:01:22,542 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-07 23:01:22,586 DEBUG [Listener at localhost/36347] zookeeper.ReadOnlyZKClient(139): Connect 0x6ed7e22a to 127.0.0.1:51773 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-07 23:01:22,591 DEBUG [Listener at localhost/36347] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7adaff63, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-07 23:01:22,595 DEBUG [hconnection-0x5c019c6a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-07 23:01:22,597 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37180, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-07 23:01:22,598 INFO [Listener at localhost/36347] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36837,1686178881529 2023-06-07 23:01:22,598 INFO [Listener at localhost/36347] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block 
reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-07 23:01:22,603 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-07 23:01:22,603 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-07 23:01:22,604 INFO [Listener at localhost/36347] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-07 23:01:22,604 INFO [Listener at localhost/36347] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-07 23:01:22,606 INFO [Listener at localhost/36347] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/test.com,8080,1, archiveDir=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/oldWALs, maxLogs=32 2023-06-07 23:01:22,611 INFO [Listener at localhost/36347] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/test.com,8080,1/test.com%2C8080%2C1.1686178882606 2023-06-07 23:01:22,611 DEBUG [Listener at localhost/36347] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34069,DS-017ed4f4-4204-4410-9832-0461bcd1faaf,DISK], DatanodeInfoWithStorage[127.0.0.1:42069,DS-f08eb48c-9bc7-4730-9358-e89f30d64f9d,DISK]] 2023-06-07 23:01:22,617 INFO [Listener at localhost/36347] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/test.com,8080,1/test.com%2C8080%2C1.1686178882606 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/test.com,8080,1/test.com%2C8080%2C1.1686178882611 2023-06-07 23:01:22,617 DEBUG [Listener at localhost/36347] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42069,DS-f08eb48c-9bc7-4730-9358-e89f30d64f9d,DISK], DatanodeInfoWithStorage[127.0.0.1:34069,DS-017ed4f4-4204-4410-9832-0461bcd1faaf,DISK]] 2023-06-07 23:01:22,617 DEBUG [Listener at localhost/36347] wal.AbstractFSWAL(716): hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/test.com,8080,1/test.com%2C8080%2C1.1686178882606 is not closed yet, will try archiving it next time 2023-06-07 23:01:22,617 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/test.com,8080,1 2023-06-07 23:01:22,624 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/test.com,8080,1/test.com%2C8080%2C1.1686178882606 to hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/oldWALs/test.com%2C8080%2C1.1686178882606 2023-06-07 23:01:22,626 DEBUG [Listener at localhost/36347] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/oldWALs 2023-06-07 23:01:22,626 INFO [Listener at localhost/36347] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1686178882611) 2023-06-07 23:01:22,626 INFO [Listener at localhost/36347] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-07 23:01:22,627 DEBUG [Listener at localhost/36347] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6ed7e22a to 127.0.0.1:51773 2023-06-07 23:01:22,627 DEBUG [Listener at 
localhost/36347] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-07 23:01:22,627 DEBUG [Listener at localhost/36347] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-06-07 23:01:22,627 DEBUG [Listener at localhost/36347] util.JVMClusterUtil(257): Found active master hash=1814654792, stopped=false
2023-06-07 23:01:22,628 INFO [Listener at localhost/36347] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36837,1686178881529
2023-06-07 23:01:22,629 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-07 23:01:22,629 INFO [Listener at localhost/36347] procedure2.ProcedureExecutor(629): Stopping
2023-06-07 23:01:22,629 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): regionserver:40553-0x100a78662eb0001, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-07 23:01:22,629 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 23:01:22,631 DEBUG [Listener at localhost/36347] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6e777c6a to 127.0.0.1:51773
2023-06-07 23:01:22,631 DEBUG [Listener at localhost/36347] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-07 23:01:22,631 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-07 23:01:22,631 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40553-0x100a78662eb0001, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-07 23:01:22,631 INFO [Listener at localhost/36347] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,40553,1686178881567' *****
2023-06-07 23:01:22,631 INFO [Listener at localhost/36347] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-06-07 23:01:22,631 INFO [RS:0;jenkins-hbase4:40553] regionserver.HeapMemoryManager(220): Stopping
2023-06-07 23:01:22,631 INFO [RS:0;jenkins-hbase4:40553] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-06-07 23:01:22,631 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-06-07 23:01:22,631 INFO [RS:0;jenkins-hbase4:40553] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-06-07 23:01:22,632 INFO [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(3303): Received CLOSE for 1a2e2527ddbf525b6e4afba0b3504467
2023-06-07 23:01:22,632 INFO [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40553,1686178881567
2023-06-07 23:01:22,632 DEBUG [RS:0;jenkins-hbase4:40553] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x463f859f to 127.0.0.1:51773
2023-06-07 23:01:22,632 DEBUG [RS:0;jenkins-hbase4:40553] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-07 23:01:22,632 INFO [RS:0;jenkins-hbase4:40553] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-06-07 23:01:22,633 INFO [RS:0;jenkins-hbase4:40553] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-06-07 23:01:22,633 INFO [RS:0;jenkins-hbase4:40553] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-06-07 23:01:22,633 INFO [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-06-07 23:01:22,633 INFO [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(1474): Waiting on 2 regions to close
2023-06-07 23:01:22,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1a2e2527ddbf525b6e4afba0b3504467, disabling compactions & flushes
2023-06-07 23:01:22,633 DEBUG [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 1a2e2527ddbf525b6e4afba0b3504467=hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467.}
2023-06-07 23:01:22,633 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467.
2023-06-07 23:01:22,633 DEBUG [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(1504): Waiting on 1588230740, 1a2e2527ddbf525b6e4afba0b3504467
2023-06-07 23:01:22,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467.
2023-06-07 23:01:22,633 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-06-07 23:01:22,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467. after waiting 1 ms
2023-06-07 23:01:22,634 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-06-07 23:01:22,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467.
2023-06-07 23:01:22,634 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-06-07 23:01:22,634 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1a2e2527ddbf525b6e4afba0b3504467 1/1 column families, dataSize=78 B heapSize=488 B
2023-06-07 23:01:22,634 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-06-07 23:01:22,634 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-06-07 23:01:22,634 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB
2023-06-07 23:01:22,643 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/namespace/1a2e2527ddbf525b6e4afba0b3504467/.tmp/info/fb53ec00ad69400fbfbc698f9235fb55
2023-06-07 23:01:22,645 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/.tmp/info/08a11715dd9e451ead529875a4a42403
2023-06-07 23:01:22,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/namespace/1a2e2527ddbf525b6e4afba0b3504467/.tmp/info/fb53ec00ad69400fbfbc698f9235fb55 as hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/namespace/1a2e2527ddbf525b6e4afba0b3504467/info/fb53ec00ad69400fbfbc698f9235fb55
2023-06-07 23:01:22,654 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/namespace/1a2e2527ddbf525b6e4afba0b3504467/info/fb53ec00ad69400fbfbc698f9235fb55, entries=2, sequenceid=6, filesize=4.8 K
2023-06-07 23:01:22,654 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 1a2e2527ddbf525b6e4afba0b3504467 in 20ms, sequenceid=6, compaction requested=false
2023-06-07 23:01:22,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace'
2023-06-07 23:01:22,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/namespace/1a2e2527ddbf525b6e4afba0b3504467/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2023-06-07 23:01:22,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467.
2023-06-07 23:01:22,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1a2e2527ddbf525b6e4afba0b3504467:
2023-06-07 23:01:22,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686178882086.1a2e2527ddbf525b6e4afba0b3504467.
2023-06-07 23:01:22,661 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/.tmp/table/03c1966bf9934500949a745bc39ed9d5
2023-06-07 23:01:22,665 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/.tmp/info/08a11715dd9e451ead529875a4a42403 as hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/info/08a11715dd9e451ead529875a4a42403
2023-06-07 23:01:22,669 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/info/08a11715dd9e451ead529875a4a42403, entries=10, sequenceid=9, filesize=5.9 K
2023-06-07 23:01:22,670 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/.tmp/table/03c1966bf9934500949a745bc39ed9d5 as hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/table/03c1966bf9934500949a745bc39ed9d5
2023-06-07 23:01:22,674 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/table/03c1966bf9934500949a745bc39ed9d5, entries=2, sequenceid=9, filesize=4.7 K
2023-06-07 23:01:22,675 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1290, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 41ms, sequenceid=9, compaction requested=false
2023-06-07 23:01:22,675 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta'
2023-06-07 23:01:22,681 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1
2023-06-07 23:01:22,682 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-06-07 23:01:22,682 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-06-07 23:01:22,682 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-07 23:01:22,682 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-06-07 23:01:22,830 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped
2023-06-07 23:01:22,830 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped
2023-06-07 23:01:22,834 INFO [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40553,1686178881567; all regions closed.
2023-06-07 23:01:22,834 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/jenkins-hbase4.apache.org,40553,1686178881567
2023-06-07 23:01:22,838 DEBUG [RS:0;jenkins-hbase4:40553] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/oldWALs
2023-06-07 23:01:22,838 INFO [RS:0;jenkins-hbase4:40553] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C40553%2C1686178881567.meta:.meta(num 1686178882043)
2023-06-07 23:01:22,839 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/WALs/jenkins-hbase4.apache.org,40553,1686178881567
2023-06-07 23:01:22,843 DEBUG [RS:0;jenkins-hbase4:40553] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/oldWALs
2023-06-07 23:01:22,843 INFO [RS:0;jenkins-hbase4:40553] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C40553%2C1686178881567:(num 1686178881947)
2023-06-07 23:01:22,843 DEBUG [RS:0;jenkins-hbase4:40553] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-07 23:01:22,843 INFO [RS:0;jenkins-hbase4:40553] regionserver.LeaseManager(133): Closed leases
2023-06-07 23:01:22,843 INFO [RS:0;jenkins-hbase4:40553] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-06-07 23:01:22,843 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-07 23:01:22,844 INFO [RS:0;jenkins-hbase4:40553] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40553
2023-06-07 23:01:22,846 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-07 23:01:22,846 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): regionserver:40553-0x100a78662eb0001, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40553,1686178881567
2023-06-07 23:01:22,846 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): regionserver:40553-0x100a78662eb0001, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-07 23:01:22,847 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40553,1686178881567]
2023-06-07 23:01:22,847 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40553,1686178881567; numProcessing=1
2023-06-07 23:01:22,849 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40553,1686178881567 already deleted, retry=false
2023-06-07 23:01:22,849 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40553,1686178881567 expired; onlineServers=0
2023-06-07 23:01:22,849 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,36837,1686178881529' *****
2023-06-07 23:01:22,849 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-06-07 23:01:22,849 DEBUG [M:0;jenkins-hbase4:36837] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6816e615, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-06-07 23:01:22,849 INFO [M:0;jenkins-hbase4:36837] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36837,1686178881529
2023-06-07 23:01:22,850 INFO [M:0;jenkins-hbase4:36837] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36837,1686178881529; all regions closed.
2023-06-07 23:01:22,850 DEBUG [M:0;jenkins-hbase4:36837] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-07 23:01:22,850 DEBUG [M:0;jenkins-hbase4:36837] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-06-07 23:01:22,850 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-06-07 23:01:22,850 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178881698] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1686178881698,5,FailOnTimeoutGroup]
2023-06-07 23:01:22,850 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178881698] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1686178881698,5,FailOnTimeoutGroup]
2023-06-07 23:01:22,850 DEBUG [M:0;jenkins-hbase4:36837] cleaner.HFileCleaner(317): Stopping file delete threads
2023-06-07 23:01:22,851 INFO [M:0;jenkins-hbase4:36837] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-06-07 23:01:22,851 INFO [M:0;jenkins-hbase4:36837] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-06-07 23:01:22,851 INFO [M:0;jenkins-hbase4:36837] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-06-07 23:01:22,851 DEBUG [M:0;jenkins-hbase4:36837] master.HMaster(1512): Stopping service threads
2023-06-07 23:01:22,851 INFO [M:0;jenkins-hbase4:36837] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-06-07 23:01:22,851 ERROR [M:0;jenkins-hbase4:36837] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup]
2023-06-07 23:01:22,852 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-06-07 23:01:22,852 INFO [M:0;jenkins-hbase4:36837] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-06-07 23:01:22,852 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-07 23:01:22,852 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-06-07 23:01:22,852 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-07 23:01:22,852 DEBUG [M:0;jenkins-hbase4:36837] zookeeper.ZKUtil(398): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-06-07 23:01:22,852 WARN [M:0;jenkins-hbase4:36837] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-06-07 23:01:22,852 INFO [M:0;jenkins-hbase4:36837] assignment.AssignmentManager(315): Stopping assignment manager
2023-06-07 23:01:22,852 INFO [M:0;jenkins-hbase4:36837] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-06-07 23:01:22,853 DEBUG [M:0;jenkins-hbase4:36837] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-07 23:01:22,853 INFO [M:0;jenkins-hbase4:36837] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 23:01:22,853 DEBUG [M:0;jenkins-hbase4:36837] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 23:01:22,853 DEBUG [M:0;jenkins-hbase4:36837] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-07 23:01:22,853 DEBUG [M:0;jenkins-hbase4:36837] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 23:01:22,853 INFO [M:0;jenkins-hbase4:36837] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.07 KB heapSize=29.55 KB
2023-06-07 23:01:22,861 INFO [M:0;jenkins-hbase4:36837] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.07 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7b47337995ea4a6db20796b48e5b61f0
2023-06-07 23:01:22,865 DEBUG [M:0;jenkins-hbase4:36837] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7b47337995ea4a6db20796b48e5b61f0 as hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7b47337995ea4a6db20796b48e5b61f0
2023-06-07 23:01:22,869 INFO [M:0;jenkins-hbase4:36837] regionserver.HStore(1080): Added hdfs://localhost:35959/user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7b47337995ea4a6db20796b48e5b61f0, entries=8, sequenceid=66, filesize=6.3 K
2023-06-07 23:01:22,870 INFO [M:0;jenkins-hbase4:36837] regionserver.HRegion(2948): Finished flush of dataSize ~24.07 KB/24646, heapSize ~29.54 KB/30248, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 16ms, sequenceid=66, compaction requested=false
2023-06-07 23:01:22,871 INFO [M:0;jenkins-hbase4:36837] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-07 23:01:22,871 DEBUG [M:0;jenkins-hbase4:36837] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-07 23:01:22,871 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/c3c19a8e-0737-e59d-49d8-e941bf18bca7/MasterData/WALs/jenkins-hbase4.apache.org,36837,1686178881529
2023-06-07 23:01:22,874 INFO [M:0;jenkins-hbase4:36837] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-06-07 23:01:22,874 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-07 23:01:22,874 INFO [M:0;jenkins-hbase4:36837] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36837
2023-06-07 23:01:22,877 DEBUG [M:0;jenkins-hbase4:36837] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36837,1686178881529 already deleted, retry=false
2023-06-07 23:01:23,030 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 23:01:23,030 INFO [M:0;jenkins-hbase4:36837] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36837,1686178881529; zookeeper connection closed.
2023-06-07 23:01:23,030 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): master:36837-0x100a78662eb0000, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 23:01:23,130 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): regionserver:40553-0x100a78662eb0001, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 23:01:23,130 INFO [RS:0;jenkins-hbase4:40553] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40553,1686178881567; zookeeper connection closed.
2023-06-07 23:01:23,130 DEBUG [Listener at localhost/36347-EventThread] zookeeper.ZKWatcher(600): regionserver:40553-0x100a78662eb0001, quorum=127.0.0.1:51773, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-07 23:01:23,130 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@a233a36] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@a233a36
2023-06-07 23:01:23,131 INFO [Listener at localhost/36347] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-06-07 23:01:23,131 WARN [Listener at localhost/36347] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-07 23:01:23,134 INFO [Listener at localhost/36347] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 23:01:23,239 WARN [BP-1630584765-172.31.14.131-1686178880882 heartbeating to localhost/127.0.0.1:35959] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-07 23:01:23,239 WARN [BP-1630584765-172.31.14.131-1686178880882 heartbeating to localhost/127.0.0.1:35959] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1630584765-172.31.14.131-1686178880882 (Datanode Uuid d9f90624-aea9-42ae-9648-00adb1bc0ec5) service to localhost/127.0.0.1:35959
2023-06-07 23:01:23,239 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/cluster_1d25ae8e-40fc-c9b6-6a18-c1fe34ab1b96/dfs/data/data3/current/BP-1630584765-172.31.14.131-1686178880882] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 23:01:23,240 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/cluster_1d25ae8e-40fc-c9b6-6a18-c1fe34ab1b96/dfs/data/data4/current/BP-1630584765-172.31.14.131-1686178880882] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 23:01:23,240 WARN [Listener at localhost/36347] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-07 23:01:23,243 INFO [Listener at localhost/36347] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 23:01:23,265 WARN [BP-1630584765-172.31.14.131-1686178880882 heartbeating to localhost/127.0.0.1:35959] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1630584765-172.31.14.131-1686178880882 (Datanode Uuid 237725f3-166f-43d6-9494-4357fb124e2b) service to localhost/127.0.0.1:35959
2023-06-07 23:01:23,266 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/cluster_1d25ae8e-40fc-c9b6-6a18-c1fe34ab1b96/dfs/data/data1/current/BP-1630584765-172.31.14.131-1686178880882] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 23:01:23,266 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ded6075f-840e-2015-8378-613148ec120f/cluster_1d25ae8e-40fc-c9b6-6a18-c1fe34ab1b96/dfs/data/data2/current/BP-1630584765-172.31.14.131-1686178880882] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-07 23:01:23,355 INFO [Listener at localhost/36347] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-07 23:01:23,466 INFO [Listener at localhost/36347] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-06-07 23:01:23,478 INFO [Listener at localhost/36347] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-06-07 23:01:23,489 INFO [Listener at localhost/36347] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=131 (was 107) - Thread LEAK? -, OpenFileDescriptor=562 (was 539) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=52 (was 4) - SystemLoadAverage LEAK? -, ProcessCount=170 (was 170), AvailableMemoryMB=349 (was 356)