2023-06-03 08:56:00,435 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9 2023-06-03 08:56:00,448 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins 2023-06-03 08:56:00,478 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=263, MaxFileDescriptor=60000, SystemLoadAverage=221, ProcessCount=172, AvailableMemoryMB=2667 2023-06-03 08:56:00,484 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-03 08:56:00,485 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/cluster_5388ee67-f954-377f-03d5-a73ec4f9a140, deleteOnExit=true 2023-06-03 08:56:00,485 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-03 08:56:00,486 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/test.cache.data in system properties and HBase conf 2023-06-03 08:56:00,486 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/hadoop.tmp.dir in system properties and HBase conf 2023-06-03 08:56:00,486 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/hadoop.log.dir in system properties and HBase conf 2023-06-03 08:56:00,487 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-03 08:56:00,488 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-03 08:56:00,488 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-03 08:56:00,591 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-06-03 08:56:00,949 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-06-03 08:56:00,953 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-03 08:56:00,953 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-03 08:56:00,953 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-03 08:56:00,953 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-03 08:56:00,954 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-03 08:56:00,954 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-03 08:56:00,954 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-03 08:56:00,955 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-03 08:56:00,955 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-03 08:56:00,955 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/nfs.dump.dir in system properties and HBase conf 2023-06-03 08:56:00,955 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/java.io.tmpdir in system properties and HBase conf 2023-06-03 08:56:00,956 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-03 08:56:00,956 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-03 08:56:00,956 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-03 08:56:01,439 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-03 08:56:01,453 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-03 08:56:01,458 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-03 08:56:01,729 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-06-03 08:56:01,878 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-06-03 08:56:01,892 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:56:01,925 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:56:01,983 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/java.io.tmpdir/Jetty_localhost_37113_hdfs____cqj1eh/webapp 2023-06-03 08:56:02,105 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37113 2023-06-03 08:56:02,112 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-03 08:56:02,115 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-03 08:56:02,116 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-03 08:56:02,572 WARN [Listener at localhost/36003] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:56:02,645 WARN [Listener at localhost/36003] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:56:02,663 WARN [Listener at localhost/36003] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:56:02,669 INFO [Listener at localhost/36003] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:56:02,674 INFO [Listener at localhost/36003] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/java.io.tmpdir/Jetty_localhost_33347_datanode____.r6ojgj/webapp 2023-06-03 08:56:02,787 INFO [Listener at localhost/36003] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33347 2023-06-03 08:56:03,127 WARN [Listener at localhost/34771] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:56:03,138 WARN [Listener at localhost/34771] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:56:03,146 WARN [Listener at localhost/34771] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:56:03,149 INFO [Listener at localhost/34771] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:56:03,155 INFO [Listener at localhost/34771] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/java.io.tmpdir/Jetty_localhost_38241_datanode____5t6pw1/webapp 2023-06-03 08:56:03,258 INFO [Listener at localhost/34771] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38241 2023-06-03 08:56:03,266 WARN [Listener at localhost/33639] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:56:03,571 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc4d43f5c7c14eaf8: Processing first storage report for DS-c09e337e-6433-4076-9464-194b414e324f from datanode 7e307dba-ed47-4d57-856c-32b11c4f4ba2 2023-06-03 08:56:03,573 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc4d43f5c7c14eaf8: from storage DS-c09e337e-6433-4076-9464-194b414e324f node DatanodeRegistration(127.0.0.1:41335, datanodeUuid=7e307dba-ed47-4d57-856c-32b11c4f4ba2, infoPort=36867, infoSecurePort=0, ipcPort=34771, storageInfo=lv=-57;cid=testClusterID;nsid=706361339;c=1685782561531), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-06-03 08:56:03,573 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0xe1a8336f1e1d33e6: Processing first storage report for DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9 from datanode 0a444cd6-590e-47e7-b10e-5c07122333ad 2023-06-03 08:56:03,574 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe1a8336f1e1d33e6: from storage DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9 node DatanodeRegistration(127.0.0.1:38615, datanodeUuid=0a444cd6-590e-47e7-b10e-5c07122333ad, infoPort=34983, infoSecurePort=0, ipcPort=33639, storageInfo=lv=-57;cid=testClusterID;nsid=706361339;c=1685782561531), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:56:03,574 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc4d43f5c7c14eaf8: Processing first storage report for DS-7b3f6194-2769-495c-9e29-6f45b5809066 from datanode 7e307dba-ed47-4d57-856c-32b11c4f4ba2 2023-06-03 08:56:03,574 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc4d43f5c7c14eaf8: from storage DS-7b3f6194-2769-495c-9e29-6f45b5809066 node DatanodeRegistration(127.0.0.1:41335, datanodeUuid=7e307dba-ed47-4d57-856c-32b11c4f4ba2, infoPort=36867, infoSecurePort=0, ipcPort=34771, storageInfo=lv=-57;cid=testClusterID;nsid=706361339;c=1685782561531), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:56:03,574 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe1a8336f1e1d33e6: Processing first storage report for DS-68a69b27-f5ea-46de-bd4e-3eba1f4b341c from datanode 0a444cd6-590e-47e7-b10e-5c07122333ad 2023-06-03 08:56:03,574 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe1a8336f1e1d33e6: from storage DS-68a69b27-f5ea-46de-bd4e-3eba1f4b341c node DatanodeRegistration(127.0.0.1:38615, datanodeUuid=0a444cd6-590e-47e7-b10e-5c07122333ad, infoPort=34983, infoSecurePort=0, ipcPort=33639, storageInfo=lv=-57;cid=testClusterID;nsid=706361339;c=1685782561531), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:56:03,651 DEBUG [Listener at localhost/33639] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9 2023-06-03 08:56:03,710 INFO [Listener at localhost/33639] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/cluster_5388ee67-f954-377f-03d5-a73ec4f9a140/zookeeper_0, clientPort=54109, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/cluster_5388ee67-f954-377f-03d5-a73ec4f9a140/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/cluster_5388ee67-f954-377f-03d5-a73ec4f9a140/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-03 08:56:03,722 INFO [Listener at localhost/33639] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54109 2023-06-03 08:56:03,733 INFO [Listener at localhost/33639] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:56:03,735 INFO [Listener at localhost/33639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:56:04,385 INFO [Listener at localhost/33639] util.FSUtils(471): Created version file at hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f with version=8 2023-06-03 08:56:04,385 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/hbase-staging 2023-06-03 08:56:04,685 INFO [Listener at localhost/33639] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-06-03 08:56:05,159 INFO [Listener at localhost/33639] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-03 08:56:05,192 INFO [Listener at localhost/33639] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:56:05,192 INFO [Listener at localhost/33639] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-03 08:56:05,193 INFO [Listener at localhost/33639] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-03 08:56:05,193 INFO [Listener at localhost/33639] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:56:05,193 INFO [Listener at localhost/33639] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-03 08:56:05,344 INFO [Listener at localhost/33639] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-03 08:56:05,423 DEBUG [Listener at localhost/33639] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-06-03 08:56:05,516 INFO [Listener at localhost/33639] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44765 2023-06-03 08:56:05,526 INFO [Listener at localhost/33639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:56:05,528 INFO [Listener at localhost/33639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:56:05,548 INFO [Listener at localhost/33639] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44765 connecting to ZooKeeper ensemble=127.0.0.1:54109 2023-06-03 08:56:05,586 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): 
master:447650x0, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-03 08:56:05,588 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44765-0x1008fe70e730000 connected 2023-06-03 08:56:05,611 DEBUG [Listener at localhost/33639] zookeeper.ZKUtil(164): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 08:56:05,612 DEBUG [Listener at localhost/33639] zookeeper.ZKUtil(164): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:56:05,615 DEBUG [Listener at localhost/33639] zookeeper.ZKUtil(164): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-03 08:56:05,623 DEBUG [Listener at localhost/33639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44765 2023-06-03 08:56:05,623 DEBUG [Listener at localhost/33639] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44765 2023-06-03 08:56:05,624 DEBUG [Listener at localhost/33639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44765 2023-06-03 08:56:05,624 DEBUG [Listener at localhost/33639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44765 2023-06-03 08:56:05,625 DEBUG [Listener at localhost/33639] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44765 2023-06-03 08:56:05,630 INFO [Listener at localhost/33639] master.HMaster(444): hbase.rootdir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f, hbase.cluster.distributed=false 2023-06-03 08:56:05,699 INFO [Listener at localhost/33639] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-03 08:56:05,700 INFO [Listener at localhost/33639] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:56:05,700 INFO [Listener at localhost/33639] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-03 08:56:05,700 INFO [Listener at localhost/33639] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-03 08:56:05,700 INFO [Listener at localhost/33639] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:56:05,700 INFO [Listener at localhost/33639] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-03 08:56:05,706 INFO [Listener at localhost/33639] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-03 08:56:05,709 INFO [Listener at localhost/33639] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40163 2023-06-03 
08:56:05,711 INFO [Listener at localhost/33639] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-03 08:56:05,717 DEBUG [Listener at localhost/33639] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-03 08:56:05,718 INFO [Listener at localhost/33639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:56:05,721 INFO [Listener at localhost/33639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:56:05,722 INFO [Listener at localhost/33639] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40163 connecting to ZooKeeper ensemble=127.0.0.1:54109 2023-06-03 08:56:05,726 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): regionserver:401630x0, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-03 08:56:05,727 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40163-0x1008fe70e730001 connected 2023-06-03 08:56:05,727 DEBUG [Listener at localhost/33639] zookeeper.ZKUtil(164): regionserver:40163-0x1008fe70e730001, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 08:56:05,728 DEBUG [Listener at localhost/33639] zookeeper.ZKUtil(164): regionserver:40163-0x1008fe70e730001, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:56:05,729 DEBUG [Listener at localhost/33639] zookeeper.ZKUtil(164): regionserver:40163-0x1008fe70e730001, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-03 08:56:05,729 DEBUG [Listener at localhost/33639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40163 2023-06-03 08:56:05,729 DEBUG [Listener at localhost/33639] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40163 2023-06-03 08:56:05,730 DEBUG [Listener at localhost/33639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40163 2023-06-03 08:56:05,730 DEBUG [Listener at localhost/33639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40163 2023-06-03 08:56:05,730 DEBUG [Listener at localhost/33639] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40163 2023-06-03 08:56:05,732 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44765,1685782564531 2023-06-03 08:56:05,741 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-03 08:56:05,742 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on existing 
znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44765,1685782564531 2023-06-03 08:56:05,761 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-03 08:56:05,761 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): regionserver:40163-0x1008fe70e730001, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-03 08:56:05,761 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:56:05,762 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-03 08:56:05,763 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44765,1685782564531 from backup master directory 2023-06-03 08:56:05,764 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-03 08:56:05,766 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44765,1685782564531 2023-06-03 08:56:05,766 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-03 08:56:05,767 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-03 08:56:05,767 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44765,1685782564531 2023-06-03 08:56:05,770 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-06-03 08:56:05,771 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-06-03 08:56:05,852 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/hbase.id with ID: 4471a06a-e313-4e54-9301-571f5175d133 2023-06-03 08:56:05,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:56:05,911 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:56:05,953 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1b5cc2be to 127.0.0.1:54109 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 08:56:05,986 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@51c30fce, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 08:56:06,011 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-03 08:56:06,013 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-03 08:56:06,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 08:56:06,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/data/master/store-tmp 2023-06-03 08:56:06,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:56:06,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-03 08:56:06,112 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:56:06,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:56:06,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-03 08:56:06,113 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:56:06,113 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:56:06,113 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 08:56:06,114 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/WALs/jenkins-hbase4.apache.org,44765,1685782564531 2023-06-03 08:56:06,134 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44765%2C1685782564531, suffix=, logDir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/WALs/jenkins-hbase4.apache.org,44765,1685782564531, archiveDir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/oldWALs, maxLogs=10 2023-06-03 08:56:06,153 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate() at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.(CommonFSUtils.java:750) at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160) at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160) at 
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62) at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295) at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200) at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:56:06,178 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/WALs/jenkins-hbase4.apache.org,44765,1685782564531/jenkins-hbase4.apache.org%2C44765%2C1685782564531.1685782566151 2023-06-03 08:56:06,178 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK], DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK]] 2023-06-03 08:56:06,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:56:06,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:56:06,182 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:56:06,183 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:56:06,236 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:56:06,243 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-03 08:56:06,265 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-03 08:56:06,277 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:56:06,283 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:56:06,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:56:06,301 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:56:06,305 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:56:06,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=789995, jitterRate=0.004531428217887878}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 08:56:06,306 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 08:56:06,307 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-03 08:56:06,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-03 08:56:06,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-03 08:56:06,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-06-03 08:56:06,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-06-03 08:56:06,365 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 34 msec 2023-06-03 08:56:06,365 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-03 08:56:06,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-03 08:56:06,395 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-03 08:56:06,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-03 08:56:06,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-03 08:56:06,426 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-03 08:56:06,431 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-03 08:56:06,435 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-03 08:56:06,439 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:56:06,441 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-03 08:56:06,441 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-03 08:56:06,452 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-03 08:56:06,457 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): regionserver:40163-0x1008fe70e730001, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-03 08:56:06,457 DEBUG [Listener at 
localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-03 08:56:06,457 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:56:06,457 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44765,1685782564531, sessionid=0x1008fe70e730000, setting cluster-up flag (Was=false) 2023-06-03 08:56:06,471 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:56:06,477 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-03 08:56:06,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44765,1685782564531 2023-06-03 08:56:06,483 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:56:06,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-03 08:56:06,489 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44765,1685782564531 2023-06-03 08:56:06,491 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/.hbase-snapshot/.tmp 2023-06-03 08:56:06,534 INFO [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(951): ClusterId : 4471a06a-e313-4e54-9301-571f5175d133 2023-06-03 08:56:06,537 DEBUG [RS:0;jenkins-hbase4:40163] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-03 08:56:06,543 DEBUG [RS:0;jenkins-hbase4:40163] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-03 08:56:06,543 DEBUG [RS:0;jenkins-hbase4:40163] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-03 08:56:06,546 DEBUG [RS:0;jenkins-hbase4:40163] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-03 08:56:06,547 DEBUG [RS:0;jenkins-hbase4:40163] zookeeper.ReadOnlyZKClient(139): Connect 0x58d714d6 to 127.0.0.1:54109 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 08:56:06,551 DEBUG [RS:0;jenkins-hbase4:40163] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5610e1bc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, 
bind address=null 2023-06-03 08:56:06,552 DEBUG [RS:0;jenkins-hbase4:40163] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@522f7fa6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-03 08:56:06,574 DEBUG [RS:0;jenkins-hbase4:40163] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:40163 2023-06-03 08:56:06,578 INFO [RS:0;jenkins-hbase4:40163] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-03 08:56:06,578 INFO [RS:0;jenkins-hbase4:40163] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-03 08:56:06,578 DEBUG [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(1022): About to register with Master. 2023-06-03 08:56:06,580 INFO [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,44765,1685782564531 with isa=jenkins-hbase4.apache.org/172.31.14.131:40163, startcode=1685782565698 2023-06-03 08:56:06,597 DEBUG [RS:0;jenkins-hbase4:40163] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-03 08:56:06,608 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-03 08:56:06,621 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:56:06,621 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:56:06,621 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:56:06,622 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:56:06,622 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-06-03 08:56:06,622 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:56:06,622 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-03 08:56:06,622 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:56:06,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685782596627 2023-06-03 08:56:06,629 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-03 08:56:06,633 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-03 08:56:06,633 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-03 08:56:06,639 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-03 08:56:06,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-03 08:56:06,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-03 08:56:06,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-03 08:56:06,652 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-03 08:56:06,652 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-03 08:56:06,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-06-03 08:56:06,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-03 08:56:06,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-03 08:56:06,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-03 08:56:06,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-03 08:56:06,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-03 08:56:06,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782566675,5,FailOnTimeoutGroup] 2023-06-03 08:56:06,678 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782566677,5,FailOnTimeoutGroup] 2023-06-03 08:56:06,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-03 08:56:06,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-03 08:56:06,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-03 08:56:06,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-03 08:56:06,690 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-03 08:56:06,692 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-03 08:56:06,692 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f 2023-06-03 08:56:06,714 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:56:06,718 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-03 08:56:06,722 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/info 2023-06-03 08:56:06,723 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-03 08:56:06,727 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:56:06,727 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-03 08:56:06,730 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/rep_barrier 2023-06-03 08:56:06,731 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-03 08:56:06,731 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53265, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-06-03 08:56:06,732 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:56:06,732 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-03 08:56:06,735 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/table 2023-06-03 08:56:06,735 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-03 08:56:06,736 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:56:06,738 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740 2023-06-03 08:56:06,739 DEBUG [PEWorker-1] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740 2023-06-03 08:56:06,743 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-03 08:56:06,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44765] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:56:06,745 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-03 08:56:06,749 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:56:06,750 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=828898, jitterRate=0.05399952828884125}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-03 08:56:06,750 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-03 08:56:06,750 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 08:56:06,750 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 08:56:06,750 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 08:56:06,750 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 08:56:06,750 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 08:56:06,751 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-03 08:56:06,752 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 08:56:06,757 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-03 08:56:06,757 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-03 08:56:06,765 DEBUG [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f 2023-06-03 08:56:06,766 DEBUG [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36003 2023-06-03 08:56:06,766 DEBUG [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-03 08:56:06,767 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-03 08:56:06,771 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:56:06,772 DEBUG 
[RS:0;jenkins-hbase4:40163] zookeeper.ZKUtil(162): regionserver:40163-0x1008fe70e730001, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:56:06,772 WARN [RS:0;jenkins-hbase4:40163] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-03 08:56:06,773 INFO [RS:0;jenkins-hbase4:40163] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 08:56:06,773 DEBUG [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(1946): logDir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/WALs/jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:56:06,773 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40163,1685782565698] 2023-06-03 08:56:06,783 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-03 08:56:06,785 DEBUG [RS:0;jenkins-hbase4:40163] zookeeper.ZKUtil(162): regionserver:40163-0x1008fe70e730001, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:56:06,785 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-03 08:56:06,796 DEBUG [RS:0;jenkins-hbase4:40163] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-03 08:56:06,805 INFO [RS:0;jenkins-hbase4:40163] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-03 08:56:06,823 INFO [RS:0;jenkins-hbase4:40163] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-03 08:56:06,826 INFO [RS:0;jenkins-hbase4:40163] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-03 08:56:06,826 INFO [RS:0;jenkins-hbase4:40163] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 08:56:06,827 INFO [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-03 08:56:06,834 INFO [RS:0;jenkins-hbase4:40163] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-03 08:56:06,834 DEBUG [RS:0;jenkins-hbase4:40163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:56:06,834 DEBUG [RS:0;jenkins-hbase4:40163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:56:06,834 DEBUG [RS:0;jenkins-hbase4:40163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:56:06,834 DEBUG [RS:0;jenkins-hbase4:40163] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:56:06,835 DEBUG [RS:0;jenkins-hbase4:40163] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:56:06,835 DEBUG [RS:0;jenkins-hbase4:40163] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-03 08:56:06,835 DEBUG [RS:0;jenkins-hbase4:40163] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:56:06,835 DEBUG [RS:0;jenkins-hbase4:40163] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:56:06,835 DEBUG [RS:0;jenkins-hbase4:40163] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:56:06,835 DEBUG [RS:0;jenkins-hbase4:40163] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:56:06,836 INFO [RS:0;jenkins-hbase4:40163] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-03 08:56:06,836 INFO [RS:0;jenkins-hbase4:40163] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-03 08:56:06,836 INFO [RS:0;jenkins-hbase4:40163] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-03 08:56:06,853 INFO [RS:0;jenkins-hbase4:40163] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-03 08:56:06,856 INFO [RS:0;jenkins-hbase4:40163] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40163,1685782565698-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-03 08:56:06,871 INFO [RS:0;jenkins-hbase4:40163] regionserver.Replication(203): jenkins-hbase4.apache.org,40163,1685782565698 started 2023-06-03 08:56:06,872 INFO [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40163,1685782565698, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40163, sessionid=0x1008fe70e730001 2023-06-03 08:56:06,872 DEBUG [RS:0;jenkins-hbase4:40163] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-03 08:56:06,872 DEBUG [RS:0;jenkins-hbase4:40163] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:56:06,872 DEBUG [RS:0;jenkins-hbase4:40163] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40163,1685782565698' 2023-06-03 08:56:06,872 DEBUG [RS:0;jenkins-hbase4:40163] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 08:56:06,873 DEBUG [RS:0;jenkins-hbase4:40163] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 08:56:06,873 DEBUG [RS:0;jenkins-hbase4:40163] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-03 08:56:06,873 DEBUG [RS:0;jenkins-hbase4:40163] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-03 08:56:06,873 DEBUG [RS:0;jenkins-hbase4:40163] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:56:06,873 DEBUG [RS:0;jenkins-hbase4:40163] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40163,1685782565698' 2023-06-03 08:56:06,873 DEBUG [RS:0;jenkins-hbase4:40163] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-03 08:56:06,874 DEBUG [RS:0;jenkins-hbase4:40163] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-03 08:56:06,874 DEBUG [RS:0;jenkins-hbase4:40163] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-03 08:56:06,874 INFO [RS:0;jenkins-hbase4:40163] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-03 08:56:06,874 INFO [RS:0;jenkins-hbase4:40163] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-03 08:56:06,937 DEBUG [jenkins-hbase4:44765] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-03 08:56:06,940 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40163,1685782565698, state=OPENING 2023-06-03 08:56:06,948 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-03 08:56:06,950 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:56:06,950 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-03 08:56:06,954 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40163,1685782565698}] 2023-06-03 08:56:06,985 INFO [RS:0;jenkins-hbase4:40163] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40163%2C1685782565698, suffix=, logDir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/WALs/jenkins-hbase4.apache.org,40163,1685782565698, archiveDir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/oldWALs, maxLogs=32 2023-06-03 08:56:06,998 INFO [RS:0;jenkins-hbase4:40163] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/WALs/jenkins-hbase4.apache.org,40163,1685782565698/jenkins-hbase4.apache.org%2C40163%2C1685782565698.1685782566988 2023-06-03 08:56:06,998 DEBUG [RS:0;jenkins-hbase4:40163] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:56:07,137 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:56:07,139 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-03 08:56:07,143 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58092, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-03 08:56:07,156 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-03 08:56:07,156 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 08:56:07,160 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40163%2C1685782565698.meta, suffix=.meta, logDir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/WALs/jenkins-hbase4.apache.org,40163,1685782565698, archiveDir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/oldWALs, maxLogs=32 2023-06-03 08:56:07,174 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/WALs/jenkins-hbase4.apache.org,40163,1685782565698/jenkins-hbase4.apache.org%2C40163%2C1685782565698.meta.1685782567161.meta 2023-06-03 08:56:07,174 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK], DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK]] 2023-06-03 08:56:07,174 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:56:07,176 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-03 08:56:07,192 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-03 08:56:07,197 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-03 08:56:07,202 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-03 08:56:07,202 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:56:07,202 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-03 08:56:07,202 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-03 08:56:07,205 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-03 08:56:07,207 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/info 2023-06-03 08:56:07,207 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/info 2023-06-03 08:56:07,207 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-03 08:56:07,208 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:56:07,208 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-03 08:56:07,209 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/rep_barrier 2023-06-03 08:56:07,210 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/rep_barrier 2023-06-03 08:56:07,210 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-03 08:56:07,211 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:56:07,211 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-03 08:56:07,213 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/table 2023-06-03 08:56:07,213 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/table 2023-06-03 08:56:07,213 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-03 08:56:07,214 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:56:07,215 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740 2023-06-03 08:56:07,218 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740 2023-06-03 08:56:07,222 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-03 08:56:07,224 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-03 08:56:07,226 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=739093, jitterRate=-0.060194820165634155}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-03 08:56:07,226 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-03 08:56:07,235 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685782567129 2023-06-03 08:56:07,251 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-03 08:56:07,252 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-03 08:56:07,252 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40163,1685782565698, state=OPEN 2023-06-03 08:56:07,255 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-03 08:56:07,255 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-03 08:56:07,260 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-03 08:56:07,260 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40163,1685782565698 in 301 msec 2023-06-03 08:56:07,266 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-03 08:56:07,266 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 494 msec 2023-06-03 08:56:07,271 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 736 msec 2023-06-03 08:56:07,272 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685782567271, completionTime=-1 2023-06-03 08:56:07,272 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-03 08:56:07,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-03 08:56:07,331 DEBUG [hconnection-0x1fd34855-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-03 08:56:07,334 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58106, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-03 08:56:07,352 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-03 08:56:07,352 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685782627352 2023-06-03 08:56:07,352 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685782687352 2023-06-03 08:56:07,352 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 80 msec 2023-06-03 08:56:07,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44765,1685782564531-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 08:56:07,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44765,1685782564531-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 08:56:07,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44765,1685782564531-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 08:56:07,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44765, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 08:56:07,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-03 08:56:07,385 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-03 08:56:07,393 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-03 08:56:07,395 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-03 08:56:07,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-03 08:56:07,408 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-03 08:56:07,411 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-03 08:56:07,435 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/.tmp/data/hbase/namespace/ac55528f6c11dd67977b18755ba40de0 2023-06-03 08:56:07,437 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/.tmp/data/hbase/namespace/ac55528f6c11dd67977b18755ba40de0 empty. 2023-06-03 08:56:07,438 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/.tmp/data/hbase/namespace/ac55528f6c11dd67977b18755ba40de0 2023-06-03 08:56:07,438 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-03 08:56:07,495 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-03 08:56:07,497 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => ac55528f6c11dd67977b18755ba40de0, NAME => 'hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/.tmp 2023-06-03 08:56:07,513 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:56:07,513 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing ac55528f6c11dd67977b18755ba40de0, disabling compactions & flushes 2023-06-03 08:56:07,513 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. 
2023-06-03 08:56:07,513 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. 2023-06-03 08:56:07,513 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. after waiting 0 ms 2023-06-03 08:56:07,513 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. 2023-06-03 08:56:07,514 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. 2023-06-03 08:56:07,514 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for ac55528f6c11dd67977b18755ba40de0: 2023-06-03 08:56:07,518 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-03 08:56:07,533 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782567521"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782567521"}]},"ts":"1685782567521"} 2023-06-03 08:56:07,560 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-03 08:56:07,562 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-03 08:56:07,566 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782567562"}]},"ts":"1685782567562"} 2023-06-03 08:56:07,570 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-03 08:56:07,579 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ac55528f6c11dd67977b18755ba40de0, ASSIGN}] 2023-06-03 08:56:07,581 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ac55528f6c11dd67977b18755ba40de0, ASSIGN 2023-06-03 08:56:07,583 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=ac55528f6c11dd67977b18755ba40de0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40163,1685782565698; forceNewPlan=false, retain=false 2023-06-03 08:56:07,734 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ac55528f6c11dd67977b18755ba40de0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:56:07,734 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782567734"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782567734"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782567734"}]},"ts":"1685782567734"} 2023-06-03 08:56:07,739 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure ac55528f6c11dd67977b18755ba40de0, server=jenkins-hbase4.apache.org,40163,1685782565698}] 2023-06-03 08:56:07,900 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. 2023-06-03 08:56:07,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ac55528f6c11dd67977b18755ba40de0, NAME => 'hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0.', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:56:07,903 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace ac55528f6c11dd67977b18755ba40de0 2023-06-03 08:56:07,903 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:56:07,903 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ac55528f6c11dd67977b18755ba40de0 2023-06-03 08:56:07,903 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ac55528f6c11dd67977b18755ba40de0 2023-06-03 08:56:07,905 INFO [StoreOpener-ac55528f6c11dd67977b18755ba40de0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region ac55528f6c11dd67977b18755ba40de0 2023-06-03 08:56:07,907 DEBUG [StoreOpener-ac55528f6c11dd67977b18755ba40de0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/namespace/ac55528f6c11dd67977b18755ba40de0/info 2023-06-03 08:56:07,907 DEBUG [StoreOpener-ac55528f6c11dd67977b18755ba40de0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/namespace/ac55528f6c11dd67977b18755ba40de0/info 2023-06-03 08:56:07,908 INFO [StoreOpener-ac55528f6c11dd67977b18755ba40de0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ac55528f6c11dd67977b18755ba40de0 columnFamilyName info 2023-06-03 08:56:07,908 INFO [StoreOpener-ac55528f6c11dd67977b18755ba40de0-1] regionserver.HStore(310): Store=ac55528f6c11dd67977b18755ba40de0/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:56:07,910 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/namespace/ac55528f6c11dd67977b18755ba40de0 2023-06-03 08:56:07,911 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/namespace/ac55528f6c11dd67977b18755ba40de0 2023-06-03 08:56:07,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ac55528f6c11dd67977b18755ba40de0 2023-06-03 08:56:07,919 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/namespace/ac55528f6c11dd67977b18755ba40de0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:56:07,919 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ac55528f6c11dd67977b18755ba40de0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=777232, jitterRate=-0.011698737740516663}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 08:56:07,920 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ac55528f6c11dd67977b18755ba40de0: 2023-06-03 08:56:07,922 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0., pid=6, masterSystemTime=1685782567893 2023-06-03 08:56:07,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. 2023-06-03 08:56:07,926 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. 
2023-06-03 08:56:07,927 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ac55528f6c11dd67977b18755ba40de0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:56:07,928 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782567927"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685782567927"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685782567927"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782567927"}]},"ts":"1685782567927"} 2023-06-03 08:56:07,936 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-03 08:56:07,937 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure ac55528f6c11dd67977b18755ba40de0, server=jenkins-hbase4.apache.org,40163,1685782565698 in 194 msec 2023-06-03 08:56:07,940 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-03 08:56:07,941 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=ac55528f6c11dd67977b18755ba40de0, ASSIGN in 358 msec 2023-06-03 08:56:07,942 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-03 08:56:07,942 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782567942"}]},"ts":"1685782567942"} 2023-06-03 08:56:07,945 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-03 08:56:07,950 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-03 08:56:07,952 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 554 msec 2023-06-03 08:56:08,008 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-03 08:56:08,010 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-03 08:56:08,010 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:56:08,050 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-03 08:56:08,071 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): 
master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-03 08:56:08,077 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 36 msec 2023-06-03 08:56:08,085 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-03 08:56:08,099 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-03 08:56:08,104 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 19 msec 2023-06-03 08:56:08,111 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-03 08:56:08,114 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-03 08:56:08,114 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.347sec 2023-06-03 08:56:08,116 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-03 08:56:08,118 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-03 08:56:08,118 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-03 08:56:08,119 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44765,1685782564531-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-03 08:56:08,119 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44765,1685782564531-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-03 08:56:08,130 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-03 08:56:08,139 DEBUG [Listener at localhost/33639] zookeeper.ReadOnlyZKClient(139): Connect 0x1ffb5227 to 127.0.0.1:54109 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 08:56:08,143 DEBUG [Listener at localhost/33639] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@582b1a33, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 08:56:08,157 DEBUG [hconnection-0x3843ca49-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-03 08:56:08,169 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58114, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-03 08:56:08,182 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44765,1685782564531 2023-06-03 08:56:08,182 INFO [Listener at localhost/33639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:56:08,193 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-03 08:56:08,193 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:56:08,194 INFO [Listener at localhost/33639] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-03 08:56:08,204 DEBUG [Listener at localhost/33639] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-03 08:56:08,208 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42742, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-03 08:56:08,217 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44765] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-03 08:56:08,217 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44765] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-03 08:56:08,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44765] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-03 08:56:08,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44765] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-06-03 08:56:08,225 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-03 08:56:08,227 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-03 08:56:08,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44765] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-06-03 08:56:08,231 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:56:08,232 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4 empty. 
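For reference, a table with the same descriptor as the one in the create call logged above could be built with the HBase 2.x Java client roughly as sketched below. This is a hypothetical illustration, not code from the test itself (the class name is invented); the deliberately tiny MAX_FILESIZE (786432) and MEMSTORE_FLUSHSIZE (8192) values are what produce the TableDescriptorChecker warnings logged just before the create request.

// Hypothetical sketch: building a descriptor equivalent to the one logged above
// with the HBase 2.x client API. The small max-filesize / flush-size values are
// intentional for the test and would normally be far larger.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateSlowSyncTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptorBuilder td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("TestLogRolling-testSlowSyncLogRolling"))
          .setMaxFileSize(786432)        // deliberately small; triggers the MAX_FILESIZE warning
          .setMemStoreFlushSize(8192)    // deliberately small; triggers the MEMSTORE_FLUSHSIZE warning
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setMaxVersions(1)
              .setBloomFilterType(BloomType.ROW)
              .setBlocksize(65536)
              .build());
      admin.createTable(td.build());
    }
  }
}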
2023-06-03 08:56:08,234 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:56:08,234 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-06-03 08:56:08,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44765] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-03 08:56:08,257 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-06-03 08:56:08,259 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => e49c712a199dfd24d15069ffe6ca69b4, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/.tmp 2023-06-03 08:56:08,275 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:56:08,275 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing e49c712a199dfd24d15069ffe6ca69b4, disabling compactions & flushes 2023-06-03 08:56:08,275 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 2023-06-03 08:56:08,275 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 2023-06-03 08:56:08,275 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. after waiting 0 ms 2023-06-03 08:56:08,276 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 2023-06-03 08:56:08,276 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 
2023-06-03 08:56:08,276 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for e49c712a199dfd24d15069ffe6ca69b4: 2023-06-03 08:56:08,280 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-03 08:56:08,282 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685782568281"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782568281"}]},"ts":"1685782568281"} 2023-06-03 08:56:08,286 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-03 08:56:08,288 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-03 08:56:08,288 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782568288"}]},"ts":"1685782568288"} 2023-06-03 08:56:08,291 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-06-03 08:56:08,294 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=e49c712a199dfd24d15069ffe6ca69b4, ASSIGN}] 2023-06-03 08:56:08,297 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=e49c712a199dfd24d15069ffe6ca69b4, ASSIGN 2023-06-03 08:56:08,298 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=e49c712a199dfd24d15069ffe6ca69b4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40163,1685782565698; forceNewPlan=false, retain=false 2023-06-03 08:56:08,450 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=e49c712a199dfd24d15069ffe6ca69b4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:56:08,450 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685782568450"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782568450"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782568450"}]},"ts":"1685782568450"} 2023-06-03 08:56:08,453 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure e49c712a199dfd24d15069ffe6ca69b4, server=jenkins-hbase4.apache.org,40163,1685782565698}] 2023-06-03 08:56:08,613 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 2023-06-03 08:56:08,613 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e49c712a199dfd24d15069ffe6ca69b4, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4.', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:56:08,614 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:56:08,614 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:56:08,614 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:56:08,614 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:56:08,619 INFO [StoreOpener-e49c712a199dfd24d15069ffe6ca69b4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:56:08,621 DEBUG [StoreOpener-e49c712a199dfd24d15069ffe6ca69b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info 2023-06-03 08:56:08,621 DEBUG [StoreOpener-e49c712a199dfd24d15069ffe6ca69b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info 2023-06-03 08:56:08,622 INFO [StoreOpener-e49c712a199dfd24d15069ffe6ca69b4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e49c712a199dfd24d15069ffe6ca69b4 columnFamilyName info 2023-06-03 08:56:08,623 INFO [StoreOpener-e49c712a199dfd24d15069ffe6ca69b4-1] regionserver.HStore(310): Store=e49c712a199dfd24d15069ffe6ca69b4/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:56:08,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:56:08,626 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:56:08,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:56:08,634 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:56:08,635 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e49c712a199dfd24d15069ffe6ca69b4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=828202, jitterRate=0.0531139075756073}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 08:56:08,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e49c712a199dfd24d15069ffe6ca69b4: 2023-06-03 08:56:08,636 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4., pid=11, masterSystemTime=1685782568607 2023-06-03 08:56:08,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 2023-06-03 08:56:08,640 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 
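The memstore flushes recorded below (dataSize=7.36 KB at sequenceid 11, 21, 31, ...) are driven purely by client writes crossing the 8192-byte flush size configured on this table. The following is a minimal sketch of the kind of write loop that produces them; it is illustrative only (row keys, qualifier, and value sizes are invented), not the actual test code.

// Illustrative only: a write loop that pushes the 'info' memstore past the table's
// 8192-byte flush size and triggers flushes like the ones logged below.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class WriteUntilFlush {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    byte[] family = Bytes.toBytes("info");
    byte[] qualifier = Bytes.toBytes("q");
    byte[] value = new byte[1024]; // ~1 KB per cell, so a handful of puts exceed the 8 KB flush size
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("TestLogRolling-testSlowSyncLogRolling"))) {
      for (int i = 0; i < 10; i++) {
        Put put = new Put(Bytes.toBytes(String.format("row-%04d", i)));
        put.addColumn(family, qualifier, value);
        table.put(put); // each put is appended to the WAL before landing in the memstore
      }
    }
  }
}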
2023-06-03 08:56:08,641 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=e49c712a199dfd24d15069ffe6ca69b4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:56:08,641 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685782568640"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685782568640"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685782568640"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782568640"}]},"ts":"1685782568640"} 2023-06-03 08:56:08,648 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-03 08:56:08,648 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure e49c712a199dfd24d15069ffe6ca69b4, server=jenkins-hbase4.apache.org,40163,1685782565698 in 191 msec 2023-06-03 08:56:08,652 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-03 08:56:08,652 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=e49c712a199dfd24d15069ffe6ca69b4, ASSIGN in 354 msec 2023-06-03 08:56:08,653 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-03 08:56:08,654 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782568653"}]},"ts":"1685782568653"} 2023-06-03 08:56:08,656 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-06-03 08:56:08,659 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-03 08:56:08,662 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 438 msec 2023-06-03 08:56:12,712 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-06-03 08:56:12,802 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-03 08:56:12,804 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-03 08:56:12,805 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-06-03 08:56:14,681 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-03 08:56:14,682 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-06-03 08:56:18,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44765] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-03 08:56:18,249 INFO [Listener at localhost/33639] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-06-03 08:56:18,252 DEBUG [Listener at localhost/33639] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-06-03 08:56:18,253 DEBUG [Listener at localhost/33639] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 2023-06-03 08:56:30,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40163] regionserver.HRegion(9158): Flush requested on e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:56:30,282 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e49c712a199dfd24d15069ffe6ca69b4 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-03 08:56:30,355 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/.tmp/info/1993fb802e284acf86f8fff16b196dff 2023-06-03 08:56:30,397 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/.tmp/info/1993fb802e284acf86f8fff16b196dff as hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/1993fb802e284acf86f8fff16b196dff 2023-06-03 08:56:30,411 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/1993fb802e284acf86f8fff16b196dff, entries=7, sequenceid=11, filesize=12.1 K 2023-06-03 08:56:30,414 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for e49c712a199dfd24d15069ffe6ca69b4 in 132ms, sequenceid=11, compaction requested=false 2023-06-03 08:56:30,415 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e49c712a199dfd24d15069ffe6ca69b4: 2023-06-03 08:56:38,494 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:56:40,697 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:56:42,900 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:56:45,103 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:56:45,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40163] regionserver.HRegion(9158): Flush requested on e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:56:45,103 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e49c712a199dfd24d15069ffe6ca69b4 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-03 08:56:45,305 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:56:45,323 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/.tmp/info/4fbb775501fa48eaa6a99cbce5eb53bb 2023-06-03 08:56:45,331 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/.tmp/info/4fbb775501fa48eaa6a99cbce5eb53bb as hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/4fbb775501fa48eaa6a99cbce5eb53bb 2023-06-03 08:56:45,342 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/4fbb775501fa48eaa6a99cbce5eb53bb, entries=7, sequenceid=21, filesize=12.1 K 2023-06-03 08:56:45,543 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:56:45,544 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for e49c712a199dfd24d15069ffe6ca69b4 in 440ms, sequenceid=21, compaction requested=false 2023-06-03 08:56:45,544 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e49c712a199dfd24d15069ffe6ca69b4: 2023-06-03 08:56:45,544 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-06-03 08:56:45,544 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-03 08:56:45,545 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/1993fb802e284acf86f8fff16b196dff 
because midkey is the same as first or last row 2023-06-03 08:56:47,306 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:56:49,509 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:56:49,510 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C40163%2C1685782565698:(num 1685782566988) roll requested 2023-06-03 08:56:49,510 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:56:49,725 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:56:49,726 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/WALs/jenkins-hbase4.apache.org,40163,1685782565698/jenkins-hbase4.apache.org%2C40163%2C1685782565698.1685782566988 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/WALs/jenkins-hbase4.apache.org,40163,1685782565698/jenkins-hbase4.apache.org%2C40163%2C1685782565698.1685782609510 2023-06-03 08:56:49,727 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:56:49,727 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/WALs/jenkins-hbase4.apache.org,40163,1685782565698/jenkins-hbase4.apache.org%2C40163%2C1685782565698.1685782566988 is not closed yet, will try archiving it next time 2023-06-03 08:56:59,522 INFO [Listener at localhost/33639] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-06-03 08:57:04,525 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:57:04,525 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:57:04,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40163] regionserver.HRegion(9158): Flush requested on e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:57:04,525 DEBUG 
[regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C40163%2C1685782565698:(num 1685782609510) roll requested 2023-06-03 08:57:04,525 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e49c712a199dfd24d15069ffe6ca69b4 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-03 08:57:06,526 INFO [Listener at localhost/33639] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-06-03 08:57:09,527 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:57:09,527 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:57:09,538 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:57:09,538 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:57:09,540 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/WALs/jenkins-hbase4.apache.org,40163,1685782565698/jenkins-hbase4.apache.org%2C40163%2C1685782565698.1685782609510 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/WALs/jenkins-hbase4.apache.org,40163,1685782565698/jenkins-hbase4.apache.org%2C40163%2C1685782565698.1685782624525 2023-06-03 08:57:09,540 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c09e337e-6433-4076-9464-194b414e324f,DISK], DatanodeInfoWithStorage[127.0.0.1:38615,DS-76da6fbd-5f9b-48fd-bf68-1aacdd5f4cc9,DISK]] 2023-06-03 08:57:09,540 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/WALs/jenkins-hbase4.apache.org,40163,1685782565698/jenkins-hbase4.apache.org%2C40163%2C1685782565698.1685782609510 is not closed yet, will try archiving it next time 2023-06-03 08:57:09,544 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/.tmp/info/fc01dc6c89f245b99c06cfa4a27ab41f 2023-06-03 08:57:09,554 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/.tmp/info/fc01dc6c89f245b99c06cfa4a27ab41f as hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/fc01dc6c89f245b99c06cfa4a27ab41f 2023-06-03 08:57:09,562 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/fc01dc6c89f245b99c06cfa4a27ab41f, entries=7, sequenceid=31, filesize=12.1 K 2023-06-03 08:57:09,564 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for e49c712a199dfd24d15069ffe6ca69b4 in 5039ms, sequenceid=31, compaction requested=true 2023-06-03 08:57:09,565 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e49c712a199dfd24d15069ffe6ca69b4: 2023-06-03 08:57:09,565 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-06-03 08:57:09,565 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-03 08:57:09,565 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/1993fb802e284acf86f8fff16b196dff because midkey is the same as first or last row 2023-06-03 08:57:09,566 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 08:57:09,567 DEBUG [RS:0;jenkins-hbase4:40163-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-03 08:57:09,570 DEBUG [RS:0;jenkins-hbase4:40163-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-03 08:57:09,573 DEBUG [RS:0;jenkins-hbase4:40163-shortCompactions-0] regionserver.HStore(1912): e49c712a199dfd24d15069ffe6ca69b4/info is initiating minor compaction (all files) 2023-06-03 08:57:09,573 INFO [RS:0;jenkins-hbase4:40163-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of e49c712a199dfd24d15069ffe6ca69b4/info in TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 
2023-06-03 08:57:09,573 INFO [RS:0;jenkins-hbase4:40163-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/1993fb802e284acf86f8fff16b196dff, hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/4fbb775501fa48eaa6a99cbce5eb53bb, hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/fc01dc6c89f245b99c06cfa4a27ab41f] into tmpdir=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/.tmp, totalSize=36.3 K 2023-06-03 08:57:09,574 DEBUG [RS:0;jenkins-hbase4:40163-shortCompactions-0] compactions.Compactor(207): Compacting 1993fb802e284acf86f8fff16b196dff, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685782578258 2023-06-03 08:57:09,575 DEBUG [RS:0;jenkins-hbase4:40163-shortCompactions-0] compactions.Compactor(207): Compacting 4fbb775501fa48eaa6a99cbce5eb53bb, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1685782592283 2023-06-03 08:57:09,576 DEBUG [RS:0;jenkins-hbase4:40163-shortCompactions-0] compactions.Compactor(207): Compacting fc01dc6c89f245b99c06cfa4a27ab41f, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1685782607104 2023-06-03 08:57:09,601 INFO [RS:0;jenkins-hbase4:40163-shortCompactions-0] throttle.PressureAwareThroughputController(145): e49c712a199dfd24d15069ffe6ca69b4#info#compaction#3 average throughput is 10.77 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-03 08:57:09,624 DEBUG [RS:0;jenkins-hbase4:40163-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/.tmp/info/fdd68d9afe654a4b911fab3ae4a40d97 as hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/fdd68d9afe654a4b911fab3ae4a40d97 2023-06-03 08:57:09,640 INFO [RS:0;jenkins-hbase4:40163-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in e49c712a199dfd24d15069ffe6ca69b4/info of e49c712a199dfd24d15069ffe6ca69b4 into fdd68d9afe654a4b911fab3ae4a40d97(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
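The "Slow sync cost" and "Requesting log roll because we exceeded slow sync threshold" messages above come from AbstractFSWAL tracking per-sync latency against configurable limits: in this run a roll is requested after count=7 slow syncs against a threshold of 5, and again when a single sync takes 5000 ms against a 5000 ms limit. A hedged configuration sketch is below; the property names are taken from memory of the HBase 2.x WAL code (HBASE-22301) and should be verified against the exact version before being relied on.

// Sketch only: tightening the WAL slow-sync roll thresholds so the WAL is rolled
// after a handful of slow syncs, as in this run.
// NOTE: the property names below are assumptions based on the HBase 2.x
// AbstractFSWAL implementation and should be double-checked for the version in use.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SlowSyncWalConfig {
  public static Configuration tighten() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.regionserver.wal.slowsync.ms", 100);            // a sync slower than this counts as "slow"
    conf.setInt("hbase.regionserver.wal.slowsync.roll.threshold", 5);  // roll after this many slow syncs in the interval
    conf.setInt("hbase.regionserver.wal.slowsync.roll.interval.ms", 60 * 1000);
    conf.setInt("hbase.regionserver.wal.roll.on.sync.ms", 5000);       // roll if a single sync exceeds this
    return conf;
  }
}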
2023-06-03 08:57:09,640 DEBUG [RS:0;jenkins-hbase4:40163-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for e49c712a199dfd24d15069ffe6ca69b4: 2023-06-03 08:57:09,640 INFO [RS:0;jenkins-hbase4:40163-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4., storeName=e49c712a199dfd24d15069ffe6ca69b4/info, priority=13, startTime=1685782629566; duration=0sec 2023-06-03 08:57:09,641 DEBUG [RS:0;jenkins-hbase4:40163-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-06-03 08:57:09,641 DEBUG [RS:0;jenkins-hbase4:40163-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-03 08:57:09,642 DEBUG [RS:0;jenkins-hbase4:40163-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/fdd68d9afe654a4b911fab3ae4a40d97 because midkey is the same as first or last row 2023-06-03 08:57:09,642 DEBUG [RS:0;jenkins-hbase4:40163-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 08:57:21,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40163] regionserver.HRegion(9158): Flush requested on e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:57:21,646 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e49c712a199dfd24d15069ffe6ca69b4 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-03 08:57:21,663 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/.tmp/info/11391047916c49abb130c836da20e99d 2023-06-03 08:57:21,671 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/.tmp/info/11391047916c49abb130c836da20e99d as hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/11391047916c49abb130c836da20e99d 2023-06-03 08:57:21,678 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/11391047916c49abb130c836da20e99d, entries=7, sequenceid=42, filesize=12.1 K 2023-06-03 08:57:21,679 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for e49c712a199dfd24d15069ffe6ca69b4 in 33ms, sequenceid=42, compaction requested=false 2023-06-03 08:57:21,679 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e49c712a199dfd24d15069ffe6ca69b4: 2023-06-03 08:57:21,679 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-06-03 
08:57:21,679 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-03 08:57:21,680 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/fdd68d9afe654a4b911fab3ae4a40d97 because midkey is the same as first or last row 2023-06-03 08:57:29,655 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-03 08:57:29,657 INFO [Listener at localhost/33639] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-03 08:57:29,657 DEBUG [Listener at localhost/33639] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1ffb5227 to 127.0.0.1:54109 2023-06-03 08:57:29,657 DEBUG [Listener at localhost/33639] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:57:29,658 DEBUG [Listener at localhost/33639] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-03 08:57:29,658 DEBUG [Listener at localhost/33639] util.JVMClusterUtil(257): Found active master hash=612685945, stopped=false 2023-06-03 08:57:29,658 INFO [Listener at localhost/33639] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44765,1685782564531 2023-06-03 08:57:29,661 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-03 08:57:29,661 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): regionserver:40163-0x1008fe70e730001, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-03 08:57:29,661 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:57:29,661 INFO [Listener at localhost/33639] procedure2.ProcedureExecutor(629): Stopping 2023-06-03 08:57:29,662 DEBUG [Listener at localhost/33639] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1b5cc2be to 127.0.0.1:54109 2023-06-03 08:57:29,662 DEBUG [Listener at localhost/33639] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:57:29,662 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:57:29,662 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40163-0x1008fe70e730001, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:57:29,662 INFO [Listener at localhost/33639] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,40163,1685782565698' ***** 2023-06-03 08:57:29,662 INFO [Listener at localhost/33639] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-03 08:57:29,663 INFO [RS:0;jenkins-hbase4:40163] regionserver.HeapMemoryManager(220): Stopping 2023-06-03 08:57:29,663 INFO [RS:0;jenkins-hbase4:40163] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-06-03 08:57:29,663 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-03 08:57:29,663 INFO [RS:0;jenkins-hbase4:40163] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-03 08:57:29,663 INFO [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(3303): Received CLOSE for e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:57:29,664 INFO [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(3303): Received CLOSE for ac55528f6c11dd67977b18755ba40de0 2023-06-03 08:57:29,665 INFO [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:57:29,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e49c712a199dfd24d15069ffe6ca69b4, disabling compactions & flushes 2023-06-03 08:57:29,665 DEBUG [RS:0;jenkins-hbase4:40163] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x58d714d6 to 127.0.0.1:54109 2023-06-03 08:57:29,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 2023-06-03 08:57:29,665 DEBUG [RS:0;jenkins-hbase4:40163] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:57:29,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 2023-06-03 08:57:29,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. after waiting 0 ms 2023-06-03 08:57:29,665 INFO [RS:0;jenkins-hbase4:40163] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-03 08:57:29,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 2023-06-03 08:57:29,665 INFO [RS:0;jenkins-hbase4:40163] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-03 08:57:29,665 INFO [RS:0;jenkins-hbase4:40163] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-06-03 08:57:29,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e49c712a199dfd24d15069ffe6ca69b4 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-06-03 08:57:29,665 INFO [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-03 08:57:29,666 INFO [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-03 08:57:29,666 DEBUG [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(1478): Online Regions={e49c712a199dfd24d15069ffe6ca69b4=TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4., ac55528f6c11dd67977b18755ba40de0=hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0., 1588230740=hbase:meta,,1.1588230740} 2023-06-03 08:57:29,667 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 08:57:29,667 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 08:57:29,667 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 08:57:29,667 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 08:57:29,667 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 08:57:29,667 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-06-03 08:57:29,668 DEBUG [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(1504): Waiting on 1588230740, ac55528f6c11dd67977b18755ba40de0, e49c712a199dfd24d15069ffe6ca69b4 2023-06-03 08:57:29,691 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/.tmp/info/56f6929b13a54bafbbb22b9667ea1bb3 2023-06-03 08:57:29,695 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/.tmp/info/5a38229697f4434faf93810a0b12b884 2023-06-03 08:57:29,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/.tmp/info/56f6929b13a54bafbbb22b9667ea1bb3 as hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/56f6929b13a54bafbbb22b9667ea1bb3 2023-06-03 08:57:29,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/56f6929b13a54bafbbb22b9667ea1bb3, entries=3, sequenceid=48, filesize=7.9 K 2023-06-03 08:57:29,721 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for e49c712a199dfd24d15069ffe6ca69b4 in 56ms, sequenceid=48, compaction requested=true 2023-06-03 08:57:29,725 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/1993fb802e284acf86f8fff16b196dff, hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/4fbb775501fa48eaa6a99cbce5eb53bb, hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/fc01dc6c89f245b99c06cfa4a27ab41f] to archive 2023-06-03 08:57:29,726 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/.tmp/table/db1f0ab7a8874fad96e1292284e79dad 2023-06-03 08:57:29,726 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-03 08:57:29,732 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/1993fb802e284acf86f8fff16b196dff to hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/archive/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/1993fb802e284acf86f8fff16b196dff 2023-06-03 08:57:29,735 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/4fbb775501fa48eaa6a99cbce5eb53bb to hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/archive/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/4fbb775501fa48eaa6a99cbce5eb53bb 2023-06-03 08:57:29,735 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/.tmp/info/5a38229697f4434faf93810a0b12b884 as hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/info/5a38229697f4434faf93810a0b12b884 2023-06-03 08:57:29,737 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/fc01dc6c89f245b99c06cfa4a27ab41f to hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/archive/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/info/fc01dc6c89f245b99c06cfa4a27ab41f 2023-06-03 08:57:29,743 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/info/5a38229697f4434faf93810a0b12b884, entries=20, sequenceid=14, filesize=7.4 K 2023-06-03 08:57:29,744 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/.tmp/table/db1f0ab7a8874fad96e1292284e79dad as hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/table/db1f0ab7a8874fad96e1292284e79dad 2023-06-03 08:57:29,751 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/table/db1f0ab7a8874fad96e1292284e79dad, entries=4, sequenceid=14, filesize=4.8 K 2023-06-03 08:57:29,752 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2934, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 85ms, sequenceid=14, compaction 
requested=false 2023-06-03 08:57:29,772 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-06-03 08:57:29,774 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-03 08:57:29,775 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-03 08:57:29,775 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 08:57:29,775 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-03 08:57:29,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/default/TestLogRolling-testSlowSyncLogRolling/e49c712a199dfd24d15069ffe6ca69b4/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-06-03 08:57:29,784 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 2023-06-03 08:57:29,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e49c712a199dfd24d15069ffe6ca69b4: 2023-06-03 08:57:29,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1685782568217.e49c712a199dfd24d15069ffe6ca69b4. 2023-06-03 08:57:29,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ac55528f6c11dd67977b18755ba40de0, disabling compactions & flushes 2023-06-03 08:57:29,786 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. 2023-06-03 08:57:29,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. 2023-06-03 08:57:29,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. after waiting 0 ms 2023-06-03 08:57:29,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. 
2023-06-03 08:57:29,786 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ac55528f6c11dd67977b18755ba40de0 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-03 08:57:29,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/namespace/ac55528f6c11dd67977b18755ba40de0/.tmp/info/455e02fb3f394d3a90e9f725b0021ec3 2023-06-03 08:57:29,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/namespace/ac55528f6c11dd67977b18755ba40de0/.tmp/info/455e02fb3f394d3a90e9f725b0021ec3 as hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/namespace/ac55528f6c11dd67977b18755ba40de0/info/455e02fb3f394d3a90e9f725b0021ec3 2023-06-03 08:57:29,823 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/namespace/ac55528f6c11dd67977b18755ba40de0/info/455e02fb3f394d3a90e9f725b0021ec3, entries=2, sequenceid=6, filesize=4.8 K 2023-06-03 08:57:29,824 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for ac55528f6c11dd67977b18755ba40de0 in 38ms, sequenceid=6, compaction requested=false 2023-06-03 08:57:29,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/data/hbase/namespace/ac55528f6c11dd67977b18755ba40de0/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-03 08:57:29,838 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. 2023-06-03 08:57:29,838 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ac55528f6c11dd67977b18755ba40de0: 2023-06-03 08:57:29,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685782567394.ac55528f6c11dd67977b18755ba40de0. 2023-06-03 08:57:29,868 INFO [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40163,1685782565698; all regions closed. 
2023-06-03 08:57:29,877 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/WALs/jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:57:29,887 DEBUG [RS:0;jenkins-hbase4:40163] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/oldWALs 2023-06-03 08:57:29,887 INFO [RS:0;jenkins-hbase4:40163] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C40163%2C1685782565698.meta:.meta(num 1685782567161) 2023-06-03 08:57:29,887 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/WALs/jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:57:29,900 DEBUG [RS:0;jenkins-hbase4:40163] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/oldWALs 2023-06-03 08:57:29,900 INFO [RS:0;jenkins-hbase4:40163] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C40163%2C1685782565698:(num 1685782624525) 2023-06-03 08:57:29,900 DEBUG [RS:0;jenkins-hbase4:40163] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:57:29,900 INFO [RS:0;jenkins-hbase4:40163] regionserver.LeaseManager(133): Closed leases 2023-06-03 08:57:29,901 INFO [RS:0;jenkins-hbase4:40163] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-03 08:57:29,901 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-03 08:57:29,902 INFO [RS:0;jenkins-hbase4:40163] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40163 2023-06-03 08:57:29,908 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): regionserver:40163-0x1008fe70e730001, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40163,1685782565698 2023-06-03 08:57:29,908 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:57:29,908 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): regionserver:40163-0x1008fe70e730001, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:57:29,910 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40163,1685782565698] 2023-06-03 08:57:29,910 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40163,1685782565698; numProcessing=1 2023-06-03 08:57:29,915 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40163,1685782565698 already deleted, retry=false 2023-06-03 08:57:29,915 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40163,1685782565698 expired; onlineServers=0 2023-06-03 08:57:29,916 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44765,1685782564531' ***** 
2023-06-03 08:57:29,916 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-03 08:57:29,916 DEBUG [M:0;jenkins-hbase4:44765] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@65b9d242, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-03 08:57:29,916 INFO [M:0;jenkins-hbase4:44765] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44765,1685782564531 2023-06-03 08:57:29,916 INFO [M:0;jenkins-hbase4:44765] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44765,1685782564531; all regions closed. 2023-06-03 08:57:29,916 DEBUG [M:0;jenkins-hbase4:44765] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:57:29,916 DEBUG [M:0;jenkins-hbase4:44765] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-03 08:57:29,916 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-03 08:57:29,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782566677] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782566677,5,FailOnTimeoutGroup] 2023-06-03 08:57:29,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782566675] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782566675,5,FailOnTimeoutGroup] 2023-06-03 08:57:29,917 DEBUG [M:0;jenkins-hbase4:44765] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-03 08:57:29,919 INFO [M:0;jenkins-hbase4:44765] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-03 08:57:29,919 INFO [M:0;jenkins-hbase4:44765] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-03 08:57:29,919 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-03 08:57:29,919 INFO [M:0;jenkins-hbase4:44765] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-06-03 08:57:29,919 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:57:29,919 DEBUG [M:0;jenkins-hbase4:44765] master.HMaster(1512): Stopping service threads 2023-06-03 08:57:29,920 INFO [M:0;jenkins-hbase4:44765] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-03 08:57:29,920 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 08:57:29,920 INFO [M:0;jenkins-hbase4:44765] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-03 08:57:29,921 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-03 08:57:29,921 DEBUG [M:0;jenkins-hbase4:44765] zookeeper.ZKUtil(398): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-03 08:57:29,921 WARN [M:0;jenkins-hbase4:44765] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-03 08:57:29,921 INFO [M:0;jenkins-hbase4:44765] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-03 08:57:29,921 INFO [M:0;jenkins-hbase4:44765] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-03 08:57:29,922 DEBUG [M:0;jenkins-hbase4:44765] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-03 08:57:29,922 INFO [M:0;jenkins-hbase4:44765] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:57:29,922 DEBUG [M:0;jenkins-hbase4:44765] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:57:29,922 DEBUG [M:0;jenkins-hbase4:44765] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-03 08:57:29,922 DEBUG [M:0;jenkins-hbase4:44765] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:57:29,922 INFO [M:0;jenkins-hbase4:44765] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.28 KB heapSize=46.71 KB 2023-06-03 08:57:29,944 INFO [M:0;jenkins-hbase4:44765] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.28 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9b2d31cc25c24bd885a0bf4194418bf0 2023-06-03 08:57:29,950 INFO [M:0;jenkins-hbase4:44765] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9b2d31cc25c24bd885a0bf4194418bf0 2023-06-03 08:57:29,951 DEBUG [M:0;jenkins-hbase4:44765] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9b2d31cc25c24bd885a0bf4194418bf0 as hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9b2d31cc25c24bd885a0bf4194418bf0 2023-06-03 08:57:29,957 INFO [M:0;jenkins-hbase4:44765] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9b2d31cc25c24bd885a0bf4194418bf0 2023-06-03 08:57:29,957 INFO [M:0;jenkins-hbase4:44765] regionserver.HStore(1080): Added hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9b2d31cc25c24bd885a0bf4194418bf0, entries=11, sequenceid=100, filesize=6.1 K 2023-06-03 08:57:29,958 INFO [M:0;jenkins-hbase4:44765] regionserver.HRegion(2948): Finished flush of dataSize ~38.28 KB/39196, heapSize ~46.70 KB/47816, currentSize=0 B/0 for 
1595e783b53d99cd5eef43b6debb2682 in 36ms, sequenceid=100, compaction requested=false 2023-06-03 08:57:29,959 INFO [M:0;jenkins-hbase4:44765] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:57:29,960 DEBUG [M:0;jenkins-hbase4:44765] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 08:57:29,960 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/MasterData/WALs/jenkins-hbase4.apache.org,44765,1685782564531 2023-06-03 08:57:29,964 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-03 08:57:29,964 INFO [M:0;jenkins-hbase4:44765] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-03 08:57:29,965 INFO [M:0;jenkins-hbase4:44765] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44765 2023-06-03 08:57:29,967 DEBUG [M:0;jenkins-hbase4:44765] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44765,1685782564531 already deleted, retry=false 2023-06-03 08:57:30,010 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): regionserver:40163-0x1008fe70e730001, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:57:30,010 INFO [RS:0;jenkins-hbase4:40163] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40163,1685782565698; zookeeper connection closed. 2023-06-03 08:57:30,010 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): regionserver:40163-0x1008fe70e730001, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:57:30,011 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5801ff06] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5801ff06 2023-06-03 08:57:30,011 INFO [Listener at localhost/33639] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-03 08:57:30,111 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:57:30,111 INFO [M:0;jenkins-hbase4:44765] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44765,1685782564531; zookeeper connection closed. 
2023-06-03 08:57:30,111 DEBUG [Listener at localhost/33639-EventThread] zookeeper.ZKWatcher(600): master:44765-0x1008fe70e730000, quorum=127.0.0.1:54109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:57:30,112 WARN [Listener at localhost/33639] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 08:57:30,117 INFO [Listener at localhost/33639] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:57:30,223 WARN [BP-663799318-172.31.14.131-1685782561531 heartbeating to localhost/127.0.0.1:36003] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 08:57:30,223 WARN [BP-663799318-172.31.14.131-1685782561531 heartbeating to localhost/127.0.0.1:36003] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-663799318-172.31.14.131-1685782561531 (Datanode Uuid 0a444cd6-590e-47e7-b10e-5c07122333ad) service to localhost/127.0.0.1:36003 2023-06-03 08:57:30,225 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/cluster_5388ee67-f954-377f-03d5-a73ec4f9a140/dfs/data/data3/current/BP-663799318-172.31.14.131-1685782561531] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:57:30,225 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/cluster_5388ee67-f954-377f-03d5-a73ec4f9a140/dfs/data/data4/current/BP-663799318-172.31.14.131-1685782561531] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:57:30,226 WARN [Listener at localhost/33639] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 08:57:30,228 INFO [Listener at localhost/33639] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:57:30,331 WARN [BP-663799318-172.31.14.131-1685782561531 heartbeating to localhost/127.0.0.1:36003] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 08:57:30,331 WARN [BP-663799318-172.31.14.131-1685782561531 heartbeating to localhost/127.0.0.1:36003] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-663799318-172.31.14.131-1685782561531 (Datanode Uuid 7e307dba-ed47-4d57-856c-32b11c4f4ba2) service to localhost/127.0.0.1:36003 2023-06-03 08:57:30,332 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/cluster_5388ee67-f954-377f-03d5-a73ec4f9a140/dfs/data/data1/current/BP-663799318-172.31.14.131-1685782561531] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:57:30,332 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/cluster_5388ee67-f954-377f-03d5-a73ec4f9a140/dfs/data/data2/current/BP-663799318-172.31.14.131-1685782561531] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:57:30,372 INFO [Listener at localhost/33639] log.Slf4jLog(67): Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:57:30,484 INFO [Listener at localhost/33639] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-03 08:57:30,519 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-03 08:57:30,531 INFO [Listener at localhost/33639] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=51 (was 10) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@a51b160 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1524112806) connection to localhost/127.0.0.1:36003 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:36003 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost:36003 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33639 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1524112806) connection to localhost/127.0.0.1:36003 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1524112806) connection to localhost/127.0.0.1:36003 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=439 (was 263) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=111 (was 221), ProcessCount=170 (was 172), AvailableMemoryMB=2141 (was 2667) 2023-06-03 08:57:30,541 INFO [Listener at localhost/33639] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=52, OpenFileDescriptor=439, MaxFileDescriptor=60000, SystemLoadAverage=111, ProcessCount=170, AvailableMemoryMB=2140 2023-06-03 08:57:30,541 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-03 08:57:30,541 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/hadoop.log.dir so I do NOT create it in target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85 2023-06-03 08:57:30,541 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89c7d848-4886-938e-2fcb-7811336723d9/hadoop.tmp.dir so I do NOT create it in target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85 2023-06-03 08:57:30,542 INFO [Listener at localhost/33639] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68, deleteOnExit=true 2023-06-03 08:57:30,542 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-03 08:57:30,542 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/test.cache.data in system properties and HBase conf 2023-06-03 08:57:30,542 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/hadoop.tmp.dir in system properties and HBase conf 2023-06-03 08:57:30,542 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/hadoop.log.dir in system properties and HBase conf 2023-06-03 08:57:30,542 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-03 08:57:30,542 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-03 08:57:30,543 INFO [Listener at localhost/33639] 
hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-03 08:57:30,543 DEBUG [Listener at localhost/33639] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-03 08:57:30,543 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-03 08:57:30,543 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-03 08:57:30,543 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-03 08:57:30,543 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-03 08:57:30,544 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-03 08:57:30,544 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-03 08:57:30,544 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-03 08:57:30,544 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-03 08:57:30,544 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-03 08:57:30,544 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/nfs.dump.dir in system properties and HBase conf 2023-06-03 08:57:30,544 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/java.io.tmpdir in system properties and HBase conf 2023-06-03 08:57:30,544 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-03 08:57:30,545 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-03 08:57:30,545 INFO [Listener at localhost/33639] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-03 08:57:30,546 WARN [Listener at localhost/33639] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-03 08:57:30,549 WARN [Listener at localhost/33639] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-03 08:57:30,550 WARN [Listener at localhost/33639] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-03 08:57:30,597 WARN [Listener at localhost/33639] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:57:30,600 INFO [Listener at localhost/33639] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:57:30,605 INFO [Listener at localhost/33639] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/java.io.tmpdir/Jetty_localhost_39143_hdfs____798ir8/webapp 2023-06-03 08:57:30,714 INFO [Listener at localhost/33639] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39143 2023-06-03 08:57:30,715 WARN [Listener at localhost/33639] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-03 08:57:30,718 WARN [Listener at localhost/33639] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-03 08:57:30,719 WARN [Listener at localhost/33639] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-03 08:57:30,761 WARN [Listener at localhost/35767] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:57:30,774 WARN [Listener at localhost/35767] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:57:30,777 WARN [Listener at localhost/35767] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:57:30,779 INFO [Listener at localhost/35767] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:57:30,783 INFO [Listener at localhost/35767] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/java.io.tmpdir/Jetty_localhost_45877_datanode____6ut74u/webapp 2023-06-03 08:57:30,840 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-03 08:57:30,876 INFO [Listener at localhost/35767] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45877 2023-06-03 08:57:30,887 WARN [Listener at localhost/37065] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:57:30,904 WARN [Listener at localhost/37065] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:57:30,906 WARN [Listener at localhost/37065] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:57:30,908 INFO [Listener at localhost/37065] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:57:30,913 INFO [Listener at localhost/37065] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/java.io.tmpdir/Jetty_localhost_38907_datanode____.sfdc7y/webapp 2023-06-03 08:57:31,009 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x941445a221884a89: Processing first storage report for DS-2778efd4-db77-4e44-a349-d3e72ad644df from datanode a15755e5-8f7d-482e-aa83-cda5814d50cc 2023-06-03 08:57:31,009 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x941445a221884a89: from storage DS-2778efd4-db77-4e44-a349-d3e72ad644df node DatanodeRegistration(127.0.0.1:36567, datanodeUuid=a15755e5-8f7d-482e-aa83-cda5814d50cc, infoPort=44007, infoSecurePort=0, ipcPort=37065, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:57:31,010 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x941445a221884a89: Processing first storage report for DS-bcb3d059-f60a-46fb-8521-4868d180df5d from datanode 
a15755e5-8f7d-482e-aa83-cda5814d50cc 2023-06-03 08:57:31,010 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x941445a221884a89: from storage DS-bcb3d059-f60a-46fb-8521-4868d180df5d node DatanodeRegistration(127.0.0.1:36567, datanodeUuid=a15755e5-8f7d-482e-aa83-cda5814d50cc, infoPort=44007, infoSecurePort=0, ipcPort=37065, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:57:31,045 INFO [Listener at localhost/37065] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38907 2023-06-03 08:57:31,054 WARN [Listener at localhost/42185] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:57:31,163 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7eb3a36fe30626cf: Processing first storage report for DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2 from datanode e844c180-729e-49be-ab3e-30ed2e40f87e 2023-06-03 08:57:31,163 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7eb3a36fe30626cf: from storage DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2 node DatanodeRegistration(127.0.0.1:40263, datanodeUuid=e844c180-729e-49be-ab3e-30ed2e40f87e, infoPort=35007, infoSecurePort=0, ipcPort=42185, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:57:31,163 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7eb3a36fe30626cf: Processing first storage report for DS-0e4f015a-7fb1-48f3-8f41-0dc9a40ff132 from datanode e844c180-729e-49be-ab3e-30ed2e40f87e 2023-06-03 08:57:31,163 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7eb3a36fe30626cf: from storage DS-0e4f015a-7fb1-48f3-8f41-0dc9a40ff132 node DatanodeRegistration(127.0.0.1:40263, datanodeUuid=e844c180-729e-49be-ab3e-30ed2e40f87e, infoPort=35007, infoSecurePort=0, ipcPort=42185, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:57:31,174 DEBUG [Listener at localhost/42185] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85 2023-06-03 08:57:31,183 INFO [Listener at localhost/42185] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/zookeeper_0, clientPort=52426, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-03 08:57:31,187 INFO [Listener at localhost/42185] zookeeper.MiniZooKeeperCluster(283): 
Started MiniZooKeeperCluster and ran 'stat' on client port=52426 2023-06-03 08:57:31,188 INFO [Listener at localhost/42185] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:57:31,189 INFO [Listener at localhost/42185] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:57:31,216 INFO [Listener at localhost/42185] util.FSUtils(471): Created version file at hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225 with version=8 2023-06-03 08:57:31,216 INFO [Listener at localhost/42185] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/hbase-staging 2023-06-03 08:57:31,219 INFO [Listener at localhost/42185] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-03 08:57:31,219 INFO [Listener at localhost/42185] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:57:31,219 INFO [Listener at localhost/42185] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-03 08:57:31,220 INFO [Listener at localhost/42185] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-03 08:57:31,220 INFO [Listener at localhost/42185] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:57:31,220 INFO [Listener at localhost/42185] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-03 08:57:31,220 INFO [Listener at localhost/42185] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-03 08:57:31,227 INFO [Listener at localhost/42185] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40165 2023-06-03 08:57:31,228 INFO [Listener at localhost/42185] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:57:31,229 INFO [Listener at localhost/42185] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:57:31,231 INFO [Listener at localhost/42185] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40165 connecting to ZooKeeper ensemble=127.0.0.1:52426 2023-06-03 08:57:31,242 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:401650x0, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-03 08:57:31,243 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKWatcher(625): master:40165-0x1008fe8642d0000 connected 2023-06-03 08:57:31,265 DEBUG [Listener at localhost/42185] zookeeper.ZKUtil(164): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 08:57:31,265 DEBUG [Listener at localhost/42185] zookeeper.ZKUtil(164): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:57:31,266 DEBUG [Listener at localhost/42185] zookeeper.ZKUtil(164): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-03 08:57:31,266 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40165 2023-06-03 08:57:31,267 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40165 2023-06-03 08:57:31,270 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40165 2023-06-03 08:57:31,271 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40165 2023-06-03 08:57:31,272 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40165 2023-06-03 08:57:31,272 INFO [Listener at localhost/42185] master.HMaster(444): hbase.rootdir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225, hbase.cluster.distributed=false 2023-06-03 08:57:31,287 INFO [Listener at localhost/42185] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-03 08:57:31,287 INFO [Listener at localhost/42185] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:57:31,288 INFO [Listener at localhost/42185] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-03 08:57:31,288 INFO [Listener at localhost/42185] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-03 08:57:31,288 INFO [Listener at localhost/42185] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:57:31,288 INFO [Listener at localhost/42185] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-03 08:57:31,288 INFO [Listener at localhost/42185] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-03 08:57:31,289 INFO [Listener at localhost/42185] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37359 2023-06-03 08:57:31,290 INFO [Listener at localhost/42185] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-03 08:57:31,290 DEBUG [Listener at 
localhost/42185] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-03 08:57:31,291 INFO [Listener at localhost/42185] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:57:31,293 INFO [Listener at localhost/42185] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:57:31,293 INFO [Listener at localhost/42185] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37359 connecting to ZooKeeper ensemble=127.0.0.1:52426 2023-06-03 08:57:31,297 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:373590x0, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-03 08:57:31,298 DEBUG [Listener at localhost/42185] zookeeper.ZKUtil(164): regionserver:373590x0, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 08:57:31,298 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37359-0x1008fe8642d0001 connected 2023-06-03 08:57:31,299 DEBUG [Listener at localhost/42185] zookeeper.ZKUtil(164): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:57:31,299 DEBUG [Listener at localhost/42185] zookeeper.ZKUtil(164): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-03 08:57:31,301 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37359 2023-06-03 08:57:31,302 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37359 2023-06-03 08:57:31,306 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37359 2023-06-03 08:57:31,306 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37359 2023-06-03 08:57:31,310 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37359 2023-06-03 08:57:31,310 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40165,1685782651218 2023-06-03 08:57:31,312 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-03 08:57:31,313 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40165,1685782651218 2023-06-03 08:57:31,315 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-03 08:57:31,315 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-03 08:57:31,315 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:57:31,317 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-03 08:57:31,318 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-03 08:57:31,318 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40165,1685782651218 from backup master directory 2023-06-03 08:57:31,319 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40165,1685782651218 2023-06-03 08:57:31,319 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-03 08:57:31,319 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-03 08:57:31,319 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40165,1685782651218 2023-06-03 08:57:31,341 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/hbase.id with ID: c860e741-55ce-48a4-9371-2deb8a3431fd 2023-06-03 08:57:31,353 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:57:31,357 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:57:31,367 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6e2d83cc to 127.0.0.1:52426 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 08:57:31,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7755add, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 
08:57:31,372 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-03 08:57:31,373 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-03 08:57:31,373 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 08:57:31,374 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/data/master/store-tmp 2023-06-03 08:57:31,389 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:57:31,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-03 08:57:31,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:57:31,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:57:31,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-03 08:57:31,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:57:31,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-03 08:57:31,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 08:57:31,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/WALs/jenkins-hbase4.apache.org,40165,1685782651218 2023-06-03 08:57:31,394 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40165%2C1685782651218, suffix=, logDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/WALs/jenkins-hbase4.apache.org,40165,1685782651218, archiveDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/oldWALs, maxLogs=10 2023-06-03 08:57:31,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/WALs/jenkins-hbase4.apache.org,40165,1685782651218/jenkins-hbase4.apache.org%2C40165%2C1685782651218.1685782651394 2023-06-03 08:57:31,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK], DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] 2023-06-03 08:57:31,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:57:31,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:57:31,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:57:31,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:57:31,409 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:57:31,411 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-03 08:57:31,412 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-03 08:57:31,413 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:57:31,414 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:57:31,415 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:57:31,418 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:57:31,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:57:31,422 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=776426, jitterRate=-0.012724250555038452}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 08:57:31,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 08:57:31,422 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-03 08:57:31,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-03 08:57:31,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-03 08:57:31,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-06-03 08:57:31,425 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-03 08:57:31,426 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-03 08:57:31,426 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-03 08:57:31,431 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-03 08:57:31,434 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-03 08:57:31,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-03 08:57:31,453 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-03 08:57:31,453 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-03 08:57:31,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-03 08:57:31,454 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-03 08:57:31,458 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:57:31,459 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-03 08:57:31,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-03 08:57:31,461 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-03 08:57:31,462 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-03 08:57:31,463 DEBUG [Listener at localhost/42185-EventThread] 
zookeeper.ZKWatcher(600): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-03 08:57:31,463 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:57:31,463 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40165,1685782651218, sessionid=0x1008fe8642d0000, setting cluster-up flag (Was=false) 2023-06-03 08:57:31,474 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-03 08:57:31,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40165,1685782651218 2023-06-03 08:57:31,480 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:57:31,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-03 08:57:31,486 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40165,1685782651218 2023-06-03 08:57:31,487 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/.hbase-snapshot/.tmp 2023-06-03 08:57:31,493 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-03 08:57:31,494 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:57:31,494 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:57:31,494 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:57:31,494 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:57:31,494 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-06-03 08:57:31,494 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:31,494 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-03 08:57:31,494 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:31,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685782681499 2023-06-03 08:57:31,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-03 08:57:31,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-03 08:57:31,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-03 08:57:31,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-03 08:57:31,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-03 08:57:31,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-03 08:57:31,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-06-03 08:57:31,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-03 08:57:31,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-03 08:57:31,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-03 08:57:31,501 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-03 08:57:31,501 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-03 08:57:31,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-03 08:57:31,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-03 08:57:31,502 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782651502,5,FailOnTimeoutGroup] 2023-06-03 08:57:31,502 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782651502,5,FailOnTimeoutGroup] 2023-06-03 08:57:31,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:31,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-03 08:57:31,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:31,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-03 08:57:31,503 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-03 08:57:31,513 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(951): ClusterId : c860e741-55ce-48a4-9371-2deb8a3431fd 2023-06-03 08:57:31,515 DEBUG [RS:0;jenkins-hbase4:37359] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-03 08:57:31,525 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-03 08:57:31,526 DEBUG [RS:0;jenkins-hbase4:37359] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-03 08:57:31,526 DEBUG [RS:0;jenkins-hbase4:37359] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-03 08:57:31,527 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-03 08:57:31,527 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225 2023-06-03 08:57:31,530 DEBUG [RS:0;jenkins-hbase4:37359] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-03 08:57:31,531 
DEBUG [RS:0;jenkins-hbase4:37359] zookeeper.ReadOnlyZKClient(139): Connect 0x4ea76624 to 127.0.0.1:52426 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 08:57:31,541 DEBUG [RS:0;jenkins-hbase4:37359] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@40609180, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 08:57:31,541 DEBUG [RS:0;jenkins-hbase4:37359] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@445f0d54, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-03 08:57:31,546 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:57:31,549 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-03 08:57:31,551 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740/info 2023-06-03 08:57:31,551 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-03 08:57:31,552 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:57:31,552 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-03 08:57:31,554 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740/rep_barrier 2023-06-03 08:57:31,555 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-03 08:57:31,556 DEBUG [RS:0;jenkins-hbase4:37359] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37359 2023-06-03 08:57:31,556 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:57:31,556 INFO [RS:0;jenkins-hbase4:37359] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-03 08:57:31,556 INFO [RS:0;jenkins-hbase4:37359] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-03 08:57:31,557 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-03 08:57:31,557 DEBUG [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1022): About to register with Master. 2023-06-03 08:57:31,557 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,40165,1685782651218 with isa=jenkins-hbase4.apache.org/172.31.14.131:37359, startcode=1685782651287 2023-06-03 08:57:31,558 DEBUG [RS:0;jenkins-hbase4:37359] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-03 08:57:31,558 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740/table 2023-06-03 08:57:31,559 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-03 08:57:31,560 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:57:31,562 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740 2023-06-03 08:57:31,563 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740 2023-06-03 08:57:31,565 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46533, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-06-03 08:57:31,565 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-03 08:57:31,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40165] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:57:31,567 DEBUG [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225 2023-06-03 08:57:31,567 DEBUG [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35767 2023-06-03 08:57:31,567 DEBUG [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-03 08:57:31,568 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-03 08:57:31,569 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:57:31,570 DEBUG [RS:0;jenkins-hbase4:37359] zookeeper.ZKUtil(162): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:57:31,570 WARN [RS:0;jenkins-hbase4:37359] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-03 08:57:31,570 INFO [RS:0;jenkins-hbase4:37359] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 08:57:31,571 DEBUG [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1946): logDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:57:31,572 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37359,1685782651287] 2023-06-03 08:57:31,574 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:57:31,576 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=869287, jitterRate=0.1053561121225357}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-03 08:57:31,576 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-03 08:57:31,576 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 08:57:31,576 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 08:57:31,576 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 08:57:31,576 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 08:57:31,576 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 08:57:31,577 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-03 08:57:31,577 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 08:57:31,579 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-03 08:57:31,579 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-03 08:57:31,579 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-03 08:57:31,585 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-03 08:57:31,587 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-03 08:57:31,588 DEBUG [RS:0;jenkins-hbase4:37359] zookeeper.ZKUtil(162): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:57:31,589 DEBUG [RS:0;jenkins-hbase4:37359] regionserver.Replication(139): Replication 
stats-in-log period=300 seconds 2023-06-03 08:57:31,589 INFO [RS:0;jenkins-hbase4:37359] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-03 08:57:31,595 INFO [RS:0;jenkins-hbase4:37359] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-03 08:57:31,595 INFO [RS:0;jenkins-hbase4:37359] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-03 08:57:31,595 INFO [RS:0;jenkins-hbase4:37359] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:31,598 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-03 08:57:31,600 INFO [RS:0;jenkins-hbase4:37359] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:31,600 DEBUG [RS:0;jenkins-hbase4:37359] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:31,601 DEBUG [RS:0;jenkins-hbase4:37359] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:31,601 DEBUG [RS:0;jenkins-hbase4:37359] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:31,601 DEBUG [RS:0;jenkins-hbase4:37359] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:31,601 DEBUG [RS:0;jenkins-hbase4:37359] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:31,601 DEBUG [RS:0;jenkins-hbase4:37359] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-03 08:57:31,601 DEBUG [RS:0;jenkins-hbase4:37359] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:31,601 DEBUG [RS:0;jenkins-hbase4:37359] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:31,601 DEBUG [RS:0;jenkins-hbase4:37359] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:31,601 DEBUG [RS:0;jenkins-hbase4:37359] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:31,609 INFO [RS:0;jenkins-hbase4:37359] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:31,609 INFO [RS:0;jenkins-hbase4:37359] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-06-03 08:57:31,609 INFO [RS:0;jenkins-hbase4:37359] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:31,625 INFO [RS:0;jenkins-hbase4:37359] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-03 08:57:31,626 INFO [RS:0;jenkins-hbase4:37359] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37359,1685782651287-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:31,640 INFO [RS:0;jenkins-hbase4:37359] regionserver.Replication(203): jenkins-hbase4.apache.org,37359,1685782651287 started 2023-06-03 08:57:31,641 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37359,1685782651287, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37359, sessionid=0x1008fe8642d0001 2023-06-03 08:57:31,641 DEBUG [RS:0;jenkins-hbase4:37359] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-03 08:57:31,641 DEBUG [RS:0;jenkins-hbase4:37359] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:57:31,641 DEBUG [RS:0;jenkins-hbase4:37359] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37359,1685782651287' 2023-06-03 08:57:31,641 DEBUG [RS:0;jenkins-hbase4:37359] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 08:57:31,642 DEBUG [RS:0;jenkins-hbase4:37359] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 08:57:31,642 DEBUG [RS:0;jenkins-hbase4:37359] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-03 08:57:31,642 DEBUG [RS:0;jenkins-hbase4:37359] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-03 08:57:31,643 DEBUG [RS:0;jenkins-hbase4:37359] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:57:31,643 DEBUG [RS:0;jenkins-hbase4:37359] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37359,1685782651287' 2023-06-03 08:57:31,643 DEBUG [RS:0;jenkins-hbase4:37359] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-03 08:57:31,644 DEBUG [RS:0;jenkins-hbase4:37359] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-03 08:57:31,644 DEBUG [RS:0;jenkins-hbase4:37359] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-03 08:57:31,645 INFO [RS:0;jenkins-hbase4:37359] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-03 08:57:31,645 INFO [RS:0;jenkins-hbase4:37359] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
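Both quota managers above log "Quota support disabled" because quota support is off by default in this test configuration. A minimal, hedged sketch of turning it on (the key name is an assumption based on common usage; confirm it before relying on it, and note the region server reads it at startup):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuotaConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Assumed key: with this set, RegionServerRpcQuotaManager and the space
    // quota manager would start instead of logging "Quota support disabled".
    conf.setBoolean("hbase.quota.enabled", true);
    System.out.println("quota enabled = " + conf.getBoolean("hbase.quota.enabled", false));
  }
}
```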
2023-06-03 08:57:31,737 DEBUG [jenkins-hbase4:40165] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-03 08:57:31,738 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37359,1685782651287, state=OPENING 2023-06-03 08:57:31,741 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-03 08:57:31,743 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:57:31,743 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37359,1685782651287}] 2023-06-03 08:57:31,743 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-03 08:57:31,748 INFO [RS:0;jenkins-hbase4:37359] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37359%2C1685782651287, suffix=, logDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287, archiveDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/oldWALs, maxLogs=32 2023-06-03 08:57:31,773 INFO [RS:0;jenkins-hbase4:37359] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782651749 2023-06-03 08:57:31,773 DEBUG [RS:0;jenkins-hbase4:37359] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK], DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] 2023-06-03 08:57:31,898 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:57:31,899 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-03 08:57:31,901 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52672, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-03 08:57:31,906 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-03 08:57:31,906 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 08:57:31,908 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37359%2C1685782651287.meta, suffix=.meta, logDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287, archiveDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/oldWALs, maxLogs=32 2023-06-03 08:57:31,922 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.meta.1685782651910.meta 2023-06-03 08:57:31,922 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK], DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK]] 2023-06-03 08:57:31,922 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:57:31,923 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-03 08:57:31,923 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-03 08:57:31,923 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-03 08:57:31,924 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-03 08:57:31,924 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:57:31,924 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-03 08:57:31,924 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-03 08:57:31,925 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-03 08:57:31,926 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740/info 2023-06-03 08:57:31,926 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740/info 2023-06-03 08:57:31,927 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-03 08:57:31,927 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:57:31,927 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-03 08:57:31,928 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740/rep_barrier 2023-06-03 08:57:31,928 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740/rep_barrier 2023-06-03 08:57:31,929 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-03 08:57:31,929 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:57:31,929 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-03 08:57:31,930 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740/table 2023-06-03 08:57:31,930 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740/table 2023-06-03 08:57:31,932 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-03 08:57:31,933 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:57:31,934 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740 2023-06-03 08:57:31,935 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/meta/1588230740 2023-06-03 08:57:31,937 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-03 08:57:31,939 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-03 08:57:31,939 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=709109, jitterRate=-0.09832161664962769}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-03 08:57:31,940 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-03 08:57:31,941 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685782651898 2023-06-03 08:57:31,944 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-03 08:57:31,945 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-03 08:57:31,945 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37359,1685782651287, state=OPEN 2023-06-03 08:57:31,947 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-03 08:57:31,947 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-03 08:57:31,950 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-03 08:57:31,950 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37359,1685782651287 in 204 msec 2023-06-03 08:57:31,953 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-03 08:57:31,953 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 371 msec 2023-06-03 08:57:31,955 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 462 msec 2023-06-03 08:57:31,955 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685782651955, completionTime=-1 2023-06-03 08:57:31,955 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-03 08:57:31,956 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-03 08:57:31,958 DEBUG [hconnection-0x73a919c0-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-03 08:57:31,960 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52678, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-03 08:57:31,962 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-03 08:57:31,962 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685782711962 2023-06-03 08:57:31,962 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685782771962 2023-06-03 08:57:31,962 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-06-03 08:57:31,969 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40165,1685782651218-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:31,969 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40165,1685782651218-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:31,969 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40165,1685782651218-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:31,969 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40165, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:31,969 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:31,969 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
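At this point the master has finished waiting on RegionServer count=1 and the AssignmentManager reports joining the cluster in 6 msec. A client can confirm the same picture from the outside; the sketch below uses the standard 2.x Admin API under the assumption that the surrounding configuration points at the test quorum, and the printed fields are illustrative only.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ClusterStatusSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // assumed to point at the test quorum
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      ClusterMetrics metrics = admin.getClusterMetrics();
      // Mirrors the master log: one live region server plus the active master.
      System.out.println("live servers  = " + metrics.getLiveServerMetrics().size());
      System.out.println("active master = " + metrics.getMasterName());
    }
  }
}
```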
2023-06-03 08:57:31,969 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-03 08:57:31,970 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-03 08:57:31,970 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-03 08:57:31,972 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-03 08:57:31,973 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-03 08:57:31,975 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/.tmp/data/hbase/namespace/8d0a8809e1ea4f068d76b3e5472fa4b1 2023-06-03 08:57:31,975 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/.tmp/data/hbase/namespace/8d0a8809e1ea4f068d76b3e5472fa4b1 empty. 2023-06-03 08:57:31,976 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/.tmp/data/hbase/namespace/8d0a8809e1ea4f068d76b3e5472fa4b1 2023-06-03 08:57:31,976 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-03 08:57:31,988 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-03 08:57:31,990 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8d0a8809e1ea4f068d76b3e5472fa4b1, NAME => 'hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/.tmp 2023-06-03 08:57:31,999 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:57:32,000 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 8d0a8809e1ea4f068d76b3e5472fa4b1, disabling compactions & flushes 2023-06-03 08:57:32,000 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. 
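The hbase:namespace descriptor above is created by the master itself, but the same column-family attributes it shows (BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', BLOCKSIZE => '8192') can be expressed for an ordinary user table with the 2.x builder API. A sketch under that assumption, using a hypothetical table name 'demo'; hbase:namespace itself remains system-managed.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceLikeDescriptorSketch {
  public static TableDescriptor build() {
    // Column family 'info' with the attributes shown in the create statement above.
    ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
        .setBloomFilterType(BloomType.ROW)
        .setInMemory(true)
        .setMaxVersions(10)
        .setBlocksize(8192)
        .build();
    // 'demo' is a hypothetical user table used only for illustration.
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
        .setColumnFamily(info)
        .build();
  }
}
```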
2023-06-03 08:57:32,000 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. 2023-06-03 08:57:32,000 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. after waiting 0 ms 2023-06-03 08:57:32,000 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. 2023-06-03 08:57:32,000 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. 2023-06-03 08:57:32,000 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 8d0a8809e1ea4f068d76b3e5472fa4b1: 2023-06-03 08:57:32,003 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-03 08:57:32,005 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782652005"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782652005"}]},"ts":"1685782652005"} 2023-06-03 08:57:32,008 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-03 08:57:32,011 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-03 08:57:32,011 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782652011"}]},"ts":"1685782652011"} 2023-06-03 08:57:32,013 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-03 08:57:32,018 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8d0a8809e1ea4f068d76b3e5472fa4b1, ASSIGN}] 2023-06-03 08:57:32,020 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8d0a8809e1ea4f068d76b3e5472fa4b1, ASSIGN 2023-06-03 08:57:32,021 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8d0a8809e1ea4f068d76b3e5472fa4b1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37359,1685782651287; forceNewPlan=false, retain=false 2023-06-03 08:57:32,172 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8d0a8809e1ea4f068d76b3e5472fa4b1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:57:32,173 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782652172"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782652172"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782652172"}]},"ts":"1685782652172"} 2023-06-03 08:57:32,175 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 8d0a8809e1ea4f068d76b3e5472fa4b1, server=jenkins-hbase4.apache.org,37359,1685782651287}] 2023-06-03 08:57:32,334 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. 2023-06-03 08:57:32,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8d0a8809e1ea4f068d76b3e5472fa4b1, NAME => 'hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1.', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:57:32,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8d0a8809e1ea4f068d76b3e5472fa4b1 2023-06-03 08:57:32,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:57:32,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8d0a8809e1ea4f068d76b3e5472fa4b1 2023-06-03 08:57:32,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8d0a8809e1ea4f068d76b3e5472fa4b1 2023-06-03 08:57:32,336 INFO [StoreOpener-8d0a8809e1ea4f068d76b3e5472fa4b1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8d0a8809e1ea4f068d76b3e5472fa4b1 2023-06-03 08:57:32,338 DEBUG [StoreOpener-8d0a8809e1ea4f068d76b3e5472fa4b1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/namespace/8d0a8809e1ea4f068d76b3e5472fa4b1/info 2023-06-03 08:57:32,338 DEBUG [StoreOpener-8d0a8809e1ea4f068d76b3e5472fa4b1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/namespace/8d0a8809e1ea4f068d76b3e5472fa4b1/info 2023-06-03 08:57:32,338 INFO [StoreOpener-8d0a8809e1ea4f068d76b3e5472fa4b1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8d0a8809e1ea4f068d76b3e5472fa4b1 columnFamilyName info 2023-06-03 08:57:32,339 INFO [StoreOpener-8d0a8809e1ea4f068d76b3e5472fa4b1-1] regionserver.HStore(310): Store=8d0a8809e1ea4f068d76b3e5472fa4b1/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:57:32,341 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/namespace/8d0a8809e1ea4f068d76b3e5472fa4b1 2023-06-03 08:57:32,341 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/namespace/8d0a8809e1ea4f068d76b3e5472fa4b1 2023-06-03 08:57:32,345 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8d0a8809e1ea4f068d76b3e5472fa4b1 2023-06-03 08:57:32,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/namespace/8d0a8809e1ea4f068d76b3e5472fa4b1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:57:32,348 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8d0a8809e1ea4f068d76b3e5472fa4b1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=847655, jitterRate=0.07784946262836456}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 08:57:32,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8d0a8809e1ea4f068d76b3e5472fa4b1: 2023-06-03 08:57:32,350 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1., pid=6, masterSystemTime=1685782652328 2023-06-03 08:57:32,353 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. 2023-06-03 08:57:32,353 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. 
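The RegionStateStore Put entries above write the regioninfo, sn, and state qualifiers into the info family of hbase:meta as the namespace region moves to OPENING and then OPEN. A client-side sketch of reading those rows back, assuming a reachable cluster; the output format is illustrative only.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    byte[] info = Bytes.toBytes("info");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(new Scan().addFamily(info))) {
      for (Result r : scanner) {
        // 'state' is one of the qualifiers the log shows being written per region row.
        byte[] state = r.getValue(info, Bytes.toBytes("state"));
        System.out.println(Bytes.toString(r.getRow()) + " -> "
            + (state == null ? "n/a" : Bytes.toString(state)));
      }
    }
  }
}
```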
2023-06-03 08:57:32,354 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8d0a8809e1ea4f068d76b3e5472fa4b1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:57:32,354 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782652353"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685782652353"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685782652353"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782652353"}]},"ts":"1685782652353"} 2023-06-03 08:57:32,358 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-03 08:57:32,359 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 8d0a8809e1ea4f068d76b3e5472fa4b1, server=jenkins-hbase4.apache.org,37359,1685782651287 in 181 msec 2023-06-03 08:57:32,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-03 08:57:32,363 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8d0a8809e1ea4f068d76b3e5472fa4b1, ASSIGN in 341 msec 2023-06-03 08:57:32,364 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-03 08:57:32,364 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782652364"}]},"ts":"1685782652364"} 2023-06-03 08:57:32,366 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-03 08:57:32,369 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-03 08:57:32,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-03 08:57:32,372 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 401 msec 2023-06-03 08:57:32,373 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-03 08:57:32,373 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:57:32,377 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-03 08:57:32,387 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): 
master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-03 08:57:32,391 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-06-03 08:57:32,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-03 08:57:32,409 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-03 08:57:32,413 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-06-03 08:57:32,424 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-03 08:57:32,427 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-03 08:57:32,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.108sec 2023-06-03 08:57:32,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-03 08:57:32,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-03 08:57:32,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-03 08:57:32,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40165,1685782651218-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-03 08:57:32,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40165,1685782651218-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
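The two CreateNamespaceProcedure runs above pre-create the built-in 'default' and 'hbase' namespaces during master startup. User namespaces go through the same procedure machinery when requested via the Admin API; a minimal sketch with a hypothetical namespace name:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateNamespaceSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // 'testns' is a hypothetical namespace; the master runs a
      // CreateNamespaceProcedure for it, as seen for 'default' and 'hbase' above.
      admin.createNamespace(NamespaceDescriptor.create("testns").build());
    }
  }
}
```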
2023-06-03 08:57:32,430 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-03 08:57:32,513 DEBUG [Listener at localhost/42185] zookeeper.ReadOnlyZKClient(139): Connect 0x73b4a6b8 to 127.0.0.1:52426 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 08:57:32,518 DEBUG [Listener at localhost/42185] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@b905416, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 08:57:32,519 DEBUG [hconnection-0x6fd746aa-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-03 08:57:32,521 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52694, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-03 08:57:32,523 INFO [Listener at localhost/42185] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,40165,1685782651218 2023-06-03 08:57:32,524 INFO [Listener at localhost/42185] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:57:32,528 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-03 08:57:32,528 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:57:32,529 INFO [Listener at localhost/42185] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-03 08:57:32,542 INFO [Listener at localhost/42185] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-03 08:57:32,543 INFO [Listener at localhost/42185] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:57:32,543 INFO [Listener at localhost/42185] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-03 08:57:32,543 INFO [Listener at localhost/42185] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-03 08:57:32,543 INFO [Listener at localhost/42185] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:57:32,543 INFO [Listener at localhost/42185] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-03 08:57:32,543 INFO [Listener at localhost/42185] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, 
hbase.pb.AdminService 2023-06-03 08:57:32,545 INFO [Listener at localhost/42185] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38719 2023-06-03 08:57:32,545 INFO [Listener at localhost/42185] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-03 08:57:32,549 DEBUG [Listener at localhost/42185] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-03 08:57:32,550 INFO [Listener at localhost/42185] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:57:32,552 INFO [Listener at localhost/42185] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:57:32,553 INFO [Listener at localhost/42185] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38719 connecting to ZooKeeper ensemble=127.0.0.1:52426 2023-06-03 08:57:32,556 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:387190x0, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-03 08:57:32,557 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38719-0x1008fe8642d0005 connected 2023-06-03 08:57:32,557 DEBUG [Listener at localhost/42185] zookeeper.ZKUtil(162): regionserver:38719-0x1008fe8642d0005, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-03 08:57:32,558 DEBUG [Listener at localhost/42185] zookeeper.ZKUtil(162): regionserver:38719-0x1008fe8642d0005, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-06-03 08:57:32,559 DEBUG [Listener at localhost/42185] zookeeper.ZKUtil(164): regionserver:38719-0x1008fe8642d0005, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-03 08:57:32,559 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38719 2023-06-03 08:57:32,559 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38719 2023-06-03 08:57:32,560 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38719 2023-06-03 08:57:32,560 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38719 2023-06-03 08:57:32,560 DEBUG [Listener at localhost/42185] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38719 2023-06-03 08:57:32,562 INFO [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer(951): ClusterId : c860e741-55ce-48a4-9371-2deb8a3431fd 2023-06-03 08:57:32,563 DEBUG [RS:1;jenkins-hbase4:38719] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-03 08:57:32,566 DEBUG [RS:1;jenkins-hbase4:38719] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-03 08:57:32,566 DEBUG [RS:1;jenkins-hbase4:38719] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-03 08:57:32,568 DEBUG 
[RS:1;jenkins-hbase4:38719] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-03 08:57:32,569 DEBUG [RS:1;jenkins-hbase4:38719] zookeeper.ReadOnlyZKClient(139): Connect 0x1c173c2f to 127.0.0.1:52426 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 08:57:32,574 DEBUG [RS:1;jenkins-hbase4:38719] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@12d0c1dd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 08:57:32,574 DEBUG [RS:1;jenkins-hbase4:38719] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@16ef575f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-03 08:57:32,584 DEBUG [RS:1;jenkins-hbase4:38719] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:38719 2023-06-03 08:57:32,584 INFO [RS:1;jenkins-hbase4:38719] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-03 08:57:32,585 INFO [RS:1;jenkins-hbase4:38719] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-03 08:57:32,585 DEBUG [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer(1022): About to register with Master. 2023-06-03 08:57:32,585 INFO [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,40165,1685782651218 with isa=jenkins-hbase4.apache.org/172.31.14.131:38719, startcode=1685782652542 2023-06-03 08:57:32,585 DEBUG [RS:1;jenkins-hbase4:38719] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-03 08:57:32,589 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39967, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-06-03 08:57:32,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40165] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38719,1685782652542 2023-06-03 08:57:32,589 DEBUG [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225 2023-06-03 08:57:32,590 DEBUG [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35767 2023-06-03 08:57:32,590 DEBUG [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-03 08:57:32,591 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:57:32,591 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:57:32,591 DEBUG [RS:1;jenkins-hbase4:38719] zookeeper.ZKUtil(162): regionserver:38719-0x1008fe8642d0005, 
quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38719,1685782652542 2023-06-03 08:57:32,592 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38719,1685782652542] 2023-06-03 08:57:32,592 WARN [RS:1;jenkins-hbase4:38719] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-03 08:57:32,592 INFO [RS:1;jenkins-hbase4:38719] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 08:57:32,592 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:57:32,592 DEBUG [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer(1946): logDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,38719,1685782652542 2023-06-03 08:57:32,592 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38719,1685782652542 2023-06-03 08:57:32,599 DEBUG [RS:1;jenkins-hbase4:38719] zookeeper.ZKUtil(162): regionserver:38719-0x1008fe8642d0005, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:57:32,599 DEBUG [RS:1;jenkins-hbase4:38719] zookeeper.ZKUtil(162): regionserver:38719-0x1008fe8642d0005, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38719,1685782652542 2023-06-03 08:57:32,600 DEBUG [RS:1;jenkins-hbase4:38719] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-03 08:57:32,601 INFO [RS:1;jenkins-hbase4:38719] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-03 08:57:32,605 INFO [RS:1;jenkins-hbase4:38719] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-03 08:57:32,606 INFO [RS:1;jenkins-hbase4:38719] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-03 08:57:32,607 INFO [RS:1;jenkins-hbase4:38719] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:32,608 INFO [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-03 08:57:32,609 INFO [RS:1;jenkins-hbase4:38719] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-03 08:57:32,610 DEBUG [RS:1;jenkins-hbase4:38719] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:32,610 DEBUG [RS:1;jenkins-hbase4:38719] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:32,610 DEBUG [RS:1;jenkins-hbase4:38719] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:32,610 DEBUG [RS:1;jenkins-hbase4:38719] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:32,611 DEBUG [RS:1;jenkins-hbase4:38719] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:32,611 DEBUG [RS:1;jenkins-hbase4:38719] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-03 08:57:32,611 DEBUG [RS:1;jenkins-hbase4:38719] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:32,611 DEBUG [RS:1;jenkins-hbase4:38719] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:32,611 DEBUG [RS:1;jenkins-hbase4:38719] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:32,611 DEBUG [RS:1;jenkins-hbase4:38719] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:57:32,614 INFO [RS:1;jenkins-hbase4:38719] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:32,614 INFO [RS:1;jenkins-hbase4:38719] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:32,615 INFO [RS:1;jenkins-hbase4:38719] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-03 08:57:32,631 INFO [RS:1;jenkins-hbase4:38719] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-03 08:57:32,632 INFO [RS:1;jenkins-hbase4:38719] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38719,1685782652542-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-03 08:57:32,646 INFO [RS:1;jenkins-hbase4:38719] regionserver.Replication(203): jenkins-hbase4.apache.org,38719,1685782652542 started 2023-06-03 08:57:32,646 INFO [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38719,1685782652542, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38719, sessionid=0x1008fe8642d0005 2023-06-03 08:57:32,646 INFO [Listener at localhost/42185] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase4:38719,5,FailOnTimeoutGroup] 2023-06-03 08:57:32,646 DEBUG [RS:1;jenkins-hbase4:38719] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-03 08:57:32,646 INFO [Listener at localhost/42185] wal.TestLogRolling(323): Replication=2 2023-06-03 08:57:32,646 DEBUG [RS:1;jenkins-hbase4:38719] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38719,1685782652542 2023-06-03 08:57:32,647 DEBUG [RS:1;jenkins-hbase4:38719] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38719,1685782652542' 2023-06-03 08:57:32,648 DEBUG [RS:1;jenkins-hbase4:38719] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 08:57:32,648 DEBUG [RS:1;jenkins-hbase4:38719] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 08:57:32,650 DEBUG [Listener at localhost/42185] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-03 08:57:32,650 DEBUG [RS:1;jenkins-hbase4:38719] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-03 08:57:32,650 DEBUG [RS:1;jenkins-hbase4:38719] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-03 08:57:32,650 DEBUG [RS:1;jenkins-hbase4:38719] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38719,1685782652542 2023-06-03 08:57:32,651 DEBUG [RS:1;jenkins-hbase4:38719] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38719,1685782652542' 2023-06-03 08:57:32,651 DEBUG [RS:1;jenkins-hbase4:38719] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-03 08:57:32,652 DEBUG [RS:1;jenkins-hbase4:38719] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-03 08:57:32,653 DEBUG [RS:1;jenkins-hbase4:38719] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-03 08:57:32,653 INFO [RS:1;jenkins-hbase4:38719] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-03 08:57:32,653 INFO [RS:1;jenkins-hbase4:38719] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-06-03 08:57:32,654 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45934, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-03 08:57:32,656 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40165] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 
2023-06-03 08:57:32,656 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40165] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-06-03 08:57:32,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40165] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-03 08:57:32,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40165] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-06-03 08:57:32,661 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-06-03 08:57:32,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40165] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 2023-06-03 08:57:32,662 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-03 08:57:32,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40165] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-03 08:57:32,665 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:57:32,665 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d empty. 
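[Editor's note, not part of the captured log] The create command and descriptor logged above correspond to an Admin-side createTable call against the mini cluster. Below is a minimal sketch, assuming the standard HBase 2.x client API and a reachable cluster, of building a table like the one in the log with the deliberately tiny MAX_FILESIZE (786432) and MEMSTORE_FLUSHSIZE (8192) values; the test presumably relaxes hbase.table.sanity.checks, so these sizes surface only as the TableDescriptorChecker WARN lines above rather than failing the create. This is an illustration, not the TestLogRolling source.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName name = TableName.valueOf("TestLogRolling-testLogRollOnDatanodeDeath");
      // Deliberately small sizes, mirroring the descriptor in the log, to force frequent
      // flushes and early splits during the test.
      admin.createTable(TableDescriptorBuilder.newBuilder(name)
          .setMaxFileSize(786432L)          // MAX_FILESIZE from the WARN line above
          .setMemStoreFlushSize(8192L)      // MEMSTORE_FLUSHSIZE from the WARN line above
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setMaxVersions(1)            // VERSIONS => '1' in the logged descriptor
              .build())
          .build());
    }
  }
}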
2023-06-03 08:57:32,666 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:57:32,666 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-06-03 08:57:32,683 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-06-03 08:57:32,685 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => 20d31b306b73cae32005b2495e3cef7d, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/.tmp 2023-06-03 08:57:32,704 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:57:32,704 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing 20d31b306b73cae32005b2495e3cef7d, disabling compactions & flushes 2023-06-03 08:57:32,704 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 2023-06-03 08:57:32,704 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 2023-06-03 08:57:32,705 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. after waiting 0 ms 2023-06-03 08:57:32,705 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 2023-06-03 08:57:32,705 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 
2023-06-03 08:57:32,705 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for 20d31b306b73cae32005b2495e3cef7d: 2023-06-03 08:57:32,708 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-06-03 08:57:32,709 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685782652709"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782652709"}]},"ts":"1685782652709"} 2023-06-03 08:57:32,711 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-03 08:57:32,713 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-03 08:57:32,713 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782652713"}]},"ts":"1685782652713"} 2023-06-03 08:57:32,715 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-06-03 08:57:32,722 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-06-03 08:57:32,724 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-06-03 08:57:32,724 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-06-03 08:57:32,724 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-06-03 08:57:32,725 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=20d31b306b73cae32005b2495e3cef7d, ASSIGN}] 2023-06-03 08:57:32,727 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=20d31b306b73cae32005b2495e3cef7d, ASSIGN 2023-06-03 08:57:32,728 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=20d31b306b73cae32005b2495e3cef7d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37359,1685782651287; forceNewPlan=false, retain=false 2023-06-03 08:57:32,756 INFO [RS:1;jenkins-hbase4:38719] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38719%2C1685782652542, suffix=, logDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,38719,1685782652542, 
archiveDir=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/oldWALs, maxLogs=32 2023-06-03 08:57:32,768 INFO [RS:1;jenkins-hbase4:38719] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,38719,1685782652542/jenkins-hbase4.apache.org%2C38719%2C1685782652542.1685782652757 2023-06-03 08:57:32,768 DEBUG [RS:1;jenkins-hbase4:38719] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK], DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK]] 2023-06-03 08:57:32,881 INFO [jenkins-hbase4:40165] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-06-03 08:57:32,882 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=20d31b306b73cae32005b2495e3cef7d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:57:32,882 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685782652881"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782652881"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782652881"}]},"ts":"1685782652881"} 2023-06-03 08:57:32,884 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 20d31b306b73cae32005b2495e3cef7d, server=jenkins-hbase4.apache.org,37359,1685782651287}] 2023-06-03 08:57:33,042 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 
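[Editor's note, not part of the captured log] The "WAL configuration: blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32" entry above reflects region server settings rather than anything table-specific. A minimal sketch, assuming the standard HBase configuration keys, of how such values could be supplied; rollsize is derived as blocksize times hbase.regionserver.logroll.multiplier, which is consistent with 256 MB * 0.5 = 128 MB in the log. The exact keys used by this test run are an assumption.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalRollConfigSketch {
  // Returns a configuration whose WAL sizing matches the AbstractFSWAL line above.
  public static Configuration walConfig() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // WAL block size
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);          // roll at 128 MB
    conf.setInt("hbase.regionserver.maxlogs", 32);                          // maxLogs=32
    return conf;
  }
}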
2023-06-03 08:57:33,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 20d31b306b73cae32005b2495e3cef7d, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d.', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:57:33,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath 20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:57:33,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:57:33,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:57:33,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:57:33,045 INFO [StoreOpener-20d31b306b73cae32005b2495e3cef7d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:57:33,046 DEBUG [StoreOpener-20d31b306b73cae32005b2495e3cef7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d/info 2023-06-03 08:57:33,046 DEBUG [StoreOpener-20d31b306b73cae32005b2495e3cef7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d/info 2023-06-03 08:57:33,047 INFO [StoreOpener-20d31b306b73cae32005b2495e3cef7d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 20d31b306b73cae32005b2495e3cef7d columnFamilyName info 2023-06-03 08:57:33,048 INFO [StoreOpener-20d31b306b73cae32005b2495e3cef7d-1] regionserver.HStore(310): Store=20d31b306b73cae32005b2495e3cef7d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:57:33,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:57:33,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:57:33,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:57:33,056 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:57:33,057 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 20d31b306b73cae32005b2495e3cef7d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=757544, jitterRate=-0.03673367202281952}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 08:57:33,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 20d31b306b73cae32005b2495e3cef7d: 2023-06-03 08:57:33,058 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d., pid=11, masterSystemTime=1685782653037 2023-06-03 08:57:33,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 2023-06-03 08:57:33,060 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 
2023-06-03 08:57:33,061 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=20d31b306b73cae32005b2495e3cef7d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:57:33,061 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685782653061"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685782653061"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685782653061"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782653061"}]},"ts":"1685782653061"} 2023-06-03 08:57:33,066 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-03 08:57:33,066 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 20d31b306b73cae32005b2495e3cef7d, server=jenkins-hbase4.apache.org,37359,1685782651287 in 179 msec 2023-06-03 08:57:33,069 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-03 08:57:33,069 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=20d31b306b73cae32005b2495e3cef7d, ASSIGN in 341 msec 2023-06-03 08:57:33,070 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-03 08:57:33,070 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782653070"}]},"ts":"1685782653070"} 2023-06-03 08:57:33,072 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-06-03 08:57:33,075 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-06-03 08:57:33,077 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 420 msec 2023-06-03 08:57:35,311 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-03 08:57:37,589 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-03 08:57:37,590 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-03 08:57:37,590 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-06-03 08:57:42,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40165] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-03 08:57:42,665 INFO [Listener at localhost/42185] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-06-03 08:57:42,667 DEBUG [Listener at localhost/42185] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 2023-06-03 08:57:42,667 DEBUG [Listener at localhost/42185] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 2023-06-03 08:57:42,682 WARN [Listener at localhost/42185] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:57:42,684 WARN [Listener at localhost/42185] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:57:42,686 INFO [Listener at localhost/42185] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:57:42,691 INFO [Listener at localhost/42185] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/java.io.tmpdir/Jetty_localhost_35905_datanode____6lvi9v/webapp 2023-06-03 08:57:42,840 INFO [Listener at localhost/42185] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35905 2023-06-03 08:57:42,848 WARN [Listener at localhost/41063] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:57:42,867 WARN [Listener at localhost/41063] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:57:42,869 WARN [Listener at localhost/41063] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:57:42,871 INFO [Listener at localhost/41063] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:57:42,875 INFO [Listener at localhost/41063] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/java.io.tmpdir/Jetty_localhost_36295_datanode____wihur6/webapp 2023-06-03 08:57:42,946 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbb4f227bc961dabe: Processing first storage report for DS-47bc18bc-5004-40a6-bb8f-c7db108f408c from datanode 62066c82-e951-4e78-8a40-7612e4746d85 2023-06-03 08:57:42,946 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbb4f227bc961dabe: from storage DS-47bc18bc-5004-40a6-bb8f-c7db108f408c node DatanodeRegistration(127.0.0.1:39217, datanodeUuid=62066c82-e951-4e78-8a40-7612e4746d85, infoPort=41435, infoSecurePort=0, ipcPort=41063, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:57:42,946 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbb4f227bc961dabe: Processing first storage report for DS-ac29ab1b-1bfe-4a39-8c68-1b3b97732c02 from datanode 62066c82-e951-4e78-8a40-7612e4746d85 2023-06-03 08:57:42,946 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0xbb4f227bc961dabe: from storage DS-ac29ab1b-1bfe-4a39-8c68-1b3b97732c02 node DatanodeRegistration(127.0.0.1:39217, datanodeUuid=62066c82-e951-4e78-8a40-7612e4746d85, infoPort=41435, infoSecurePort=0, ipcPort=41063, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:57:42,979 INFO [Listener at localhost/41063] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36295 2023-06-03 08:57:42,986 WARN [Listener at localhost/34981] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:57:43,005 WARN [Listener at localhost/34981] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:57:43,008 WARN [Listener at localhost/34981] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:57:43,009 INFO [Listener at localhost/34981] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:57:43,012 INFO [Listener at localhost/34981] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/java.io.tmpdir/Jetty_localhost_36109_datanode____.7lxyoy/webapp 2023-06-03 08:57:43,085 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x48824a296b067a99: Processing first storage report for DS-042f6136-5d07-4e66-a698-069c60376516 from datanode 165e39e4-ff1a-4e3c-8d47-956cb1c799ab 2023-06-03 08:57:43,086 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x48824a296b067a99: from storage DS-042f6136-5d07-4e66-a698-069c60376516 node DatanodeRegistration(127.0.0.1:37741, datanodeUuid=165e39e4-ff1a-4e3c-8d47-956cb1c799ab, infoPort=45431, infoSecurePort=0, ipcPort=34981, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:57:43,086 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x48824a296b067a99: Processing first storage report for DS-e7b2180c-dcb8-4e16-a874-1742f5c122f3 from datanode 165e39e4-ff1a-4e3c-8d47-956cb1c799ab 2023-06-03 08:57:43,086 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x48824a296b067a99: from storage DS-e7b2180c-dcb8-4e16-a874-1742f5c122f3 node DatanodeRegistration(127.0.0.1:37741, datanodeUuid=165e39e4-ff1a-4e3c-8d47-956cb1c799ab, infoPort=45431, infoSecurePort=0, ipcPort=34981, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:57:43,112 INFO [Listener at localhost/34981] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36109 2023-06-03 08:57:43,120 WARN [Listener at localhost/37483] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:57:43,216 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa172ee3f0ab685ee: Processing first storage report for 
DS-db2dee29-eee7-45df-b9c8-8b66eced9c47 from datanode 39843660-a398-4fea-ab57-ac4ebe63583a 2023-06-03 08:57:43,216 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa172ee3f0ab685ee: from storage DS-db2dee29-eee7-45df-b9c8-8b66eced9c47 node DatanodeRegistration(127.0.0.1:37977, datanodeUuid=39843660-a398-4fea-ab57-ac4ebe63583a, infoPort=34595, infoSecurePort=0, ipcPort=37483, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:57:43,216 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa172ee3f0ab685ee: Processing first storage report for DS-de9a6a86-86a0-442b-9e5e-53b87c1ff5b8 from datanode 39843660-a398-4fea-ab57-ac4ebe63583a 2023-06-03 08:57:43,216 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa172ee3f0ab685ee: from storage DS-de9a6a86-86a0-442b-9e5e-53b87c1ff5b8 node DatanodeRegistration(127.0.0.1:37977, datanodeUuid=39843660-a398-4fea-ab57-ac4ebe63583a, infoPort=34595, infoSecurePort=0, ipcPort=37483, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:57:43,227 WARN [Listener at localhost/37483] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 08:57:43,228 WARN [ResponseProcessor for block BP-106091895-172.31.14.131-1685782650553:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-106091895-172.31.14.131-1685782650553:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-03 08:57:43,232 WARN [ResponseProcessor for block BP-106091895-172.31.14.131-1685782650553:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-106091895-172.31.14.131-1685782650553:blk_1073741838_1014 java.io.IOException: Bad response ERROR for BP-106091895-172.31.14.131-1685782650553:blk_1073741838_1014 from datanode DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-03 08:57:43,233 WARN [DataStreamer for file /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782651749 block BP-106091895-172.31.14.131-1685782650553:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-106091895-172.31.14.131-1685782650553:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK], DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK]) is bad. 
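[Editor's note, not part of the captured log] The EOFException / "Bad response ERROR" / "datanode ... is bad" warnings beginning above mark the point where the test takes a datanode down underneath the open WAL pipelines, which is what testLogRollOnDatanodeDeath exercises. A minimal, hypothetical sketch of that pattern on an HBaseTestingUtility-managed mini cluster (not the actual TestLogRolling code): extra datanodes are started first so pipeline recovery has healthy replicas to fall back on, then one of the originals is stopped.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class DatanodeDeathSketch {
  // Stops one datanode under live WAL writers; DFSClient then logs pipeline-recovery
  // warnings like the ones captured above.
  public static void killOneDatanode(HBaseTestingUtility util) throws Exception {
    MiniDFSCluster dfs = util.getDFSCluster();
    // Bring up additional datanodes before failing one, so recovery can succeed.
    dfs.startDataNodes(util.getConfiguration(), 3, true, null, null);
    dfs.waitActive();
    // Drop the first datanode; its replicas become the "bad" nodes in the error-recovery lines.
    dfs.stopDataNode(0);
  }
}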
2023-06-03 08:57:43,231 WARN [ResponseProcessor for block BP-106091895-172.31.14.131-1685782650553:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-106091895-172.31.14.131-1685782650553:blk_1073741833_1009 java.io.IOException: Bad response ERROR for BP-106091895-172.31.14.131-1685782650553:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-03 08:57:43,235 WARN [DataStreamer for file /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.meta.1685782651910.meta block BP-106091895-172.31.14.131-1685782650553:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-106091895-172.31.14.131-1685782650553:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK], DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK]) is bad. 2023-06-03 08:57:43,235 WARN [PacketResponder: BP-106091895-172.31.14.131-1685782650553:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40263]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,229 WARN [ResponseProcessor for block BP-106091895-172.31.14.131-1685782650553:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-106091895-172.31.14.131-1685782650553:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-03 08:57:43,235 WARN [PacketResponder: BP-106091895-172.31.14.131-1685782650553:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40263]] datanode.BlockReceiver$PacketResponder(1486): IOException in 
BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,235 WARN [DataStreamer for file /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/WALs/jenkins-hbase4.apache.org,40165,1685782651218/jenkins-hbase4.apache.org%2C40165%2C1685782651218.1685782651394 block BP-106091895-172.31.14.131-1685782650553:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-106091895-172.31.14.131-1685782650553:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK], DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK]) is bad. 2023-06-03 08:57:43,235 WARN [DataStreamer for file /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,38719,1685782652542/jenkins-hbase4.apache.org%2C38719%2C1685782652542.1685782652757 block BP-106091895-172.31.14.131-1685782650553:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-106091895-172.31.14.131-1685782650553:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK], DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK]) is bad. 
2023-06-03 08:57:43,245 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:37304 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:36567:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37304 dst: /127.0.0.1:36567 java.io.IOException: Interrupted receiveBlock at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:1067) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,249 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1056346902_17 at /127.0.0.1:37348 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:36567:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37348 dst: /127.0.0.1:36567 java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:406) at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,250 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-996418912_17 at /127.0.0.1:37262 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:36567:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37262 dst: /127.0.0.1:36567 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:36567 remote=/127.0.0.1:37262]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,250 INFO [Listener at localhost/37483] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:57:43,251 WARN [PacketResponder: BP-106091895-172.31.14.131-1685782650553:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:36567]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,252 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:37294 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:36567:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37294 dst: /127.0.0.1:36567 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:36567 remote=/127.0.0.1:37294]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,253 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-996418912_17 at /127.0.0.1:56024 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:40263:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56024 dst: /127.0.0.1:40263 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,253 WARN [PacketResponder: BP-106091895-172.31.14.131-1685782650553:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:36567]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,255 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:56052 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:40263:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56052 dst: /127.0.0.1:40263 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,353 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:56074 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:40263:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56074 dst: /127.0.0.1:40263 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,354 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1056346902_17 at /127.0.0.1:56138 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:40263:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56138 dst: /127.0.0.1:40263 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,354 WARN [BP-106091895-172.31.14.131-1685782650553 heartbeating to localhost/127.0.0.1:35767] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 08:57:43,355 WARN [BP-106091895-172.31.14.131-1685782650553 heartbeating to localhost/127.0.0.1:35767] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-106091895-172.31.14.131-1685782650553 (Datanode Uuid e844c180-729e-49be-ab3e-30ed2e40f87e) service to localhost/127.0.0.1:35767 2023-06-03 08:57:43,356 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data3/current/BP-106091895-172.31.14.131-1685782650553] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:57:43,356 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data4/current/BP-106091895-172.31.14.131-1685782650553] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:57:43,358 WARN [Listener at localhost/37483] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 08:57:43,358 WARN [ResponseProcessor for block BP-106091895-172.31.14.131-1685782650553:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-106091895-172.31.14.131-1685782650553:blk_1073741833_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at 
org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-03 08:57:43,358 WARN [ResponseProcessor for block BP-106091895-172.31.14.131-1685782650553:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-106091895-172.31.14.131-1685782650553:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-03 08:57:43,359 WARN [ResponseProcessor for block BP-106091895-172.31.14.131-1685782650553:blk_1073741832_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-106091895-172.31.14.131-1685782650553:blk_1073741832_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-03 08:57:43,358 WARN [ResponseProcessor for block BP-106091895-172.31.14.131-1685782650553:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-106091895-172.31.14.131-1685782650553:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-03 08:57:43,362 INFO [Listener at localhost/37483] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:57:43,465 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-996418912_17 at /127.0.0.1:37438 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:36567:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37438 dst: /127.0.0.1:36567 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,468 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1056346902_17 at /127.0.0.1:37476 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:36567:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37476 dst: /127.0.0.1:36567 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,467 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:37460 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:36567:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37460 dst: /127.0.0.1:36567 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,466 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:37452 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:36567:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37452 dst: /127.0.0.1:36567 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:43,468 WARN [BP-106091895-172.31.14.131-1685782650553 heartbeating to localhost/127.0.0.1:35767] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 08:57:43,470 WARN [BP-106091895-172.31.14.131-1685782650553 heartbeating to localhost/127.0.0.1:35767] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-106091895-172.31.14.131-1685782650553 (Datanode Uuid a15755e5-8f7d-482e-aa83-cda5814d50cc) service to localhost/127.0.0.1:35767 2023-06-03 08:57:43,471 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data1/current/BP-106091895-172.31.14.131-1685782650553] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:57:43,471 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data2/current/BP-106091895-172.31.14.131-1685782650553] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:57:43,477 WARN [RS:0;jenkins-hbase4:37359.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:57:43,478 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C37359%2C1685782651287:(num 1685782651749) roll requested 2023-06-03 08:57:43,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:57:43,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:52694 deadline: 1685782673476, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-06-03 08:57:43,494 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-06-03 08:57:43,494 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782651749 with entries=4, filesize=983 B; new WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782663478 2023-06-03 08:57:43,495 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37977,DS-db2dee29-eee7-45df-b9c8-8b66eced9c47,DISK], DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK]] 2023-06-03 08:57:43,495 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782651749 is not closed yet, will try archiving it next time 2023-06-03 08:57:43,495 WARN 
[Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:57:43,496 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782651749; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:57:55,560 INFO [Listener at localhost/37483] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782663478 2023-06-03 08:57:55,561 WARN [Listener at localhost/37483] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 08:57:55,562 WARN [ResponseProcessor for block BP-106091895-172.31.14.131-1685782650553:blk_1073741839_1019] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-106091895-172.31.14.131-1685782650553:blk_1073741839_1019 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-03 08:57:55,562 WARN [DataStreamer for file /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782663478 block BP-106091895-172.31.14.131-1685782650553:blk_1073741839_1019] hdfs.DataStreamer(1548): Error Recovery for BP-106091895-172.31.14.131-1685782650553:blk_1073741839_1019 in pipeline [DatanodeInfoWithStorage[127.0.0.1:37977,DS-db2dee29-eee7-45df-b9c8-8b66eced9c47,DISK], DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:37977,DS-db2dee29-eee7-45df-b9c8-8b66eced9c47,DISK]) is bad. 
2023-06-03 08:57:55,566 INFO [Listener at localhost/37483] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:57:55,567 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:44794 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:39217:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44794 dst: /127.0.0.1:39217 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:39217 remote=/127.0.0.1:44794]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:55,568 WARN [PacketResponder: BP-106091895-172.31.14.131-1685782650553:blk_1073741839_1019, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39217]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:55,569 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:55222 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:37977:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55222 dst: /127.0.0.1:37977 java.io.InterruptedIOException: Interrupted while waiting for IO on 
channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:55,671 WARN [BP-106091895-172.31.14.131-1685782650553 heartbeating to localhost/127.0.0.1:35767] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 08:57:55,671 WARN [BP-106091895-172.31.14.131-1685782650553 heartbeating to localhost/127.0.0.1:35767] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-106091895-172.31.14.131-1685782650553 (Datanode Uuid 39843660-a398-4fea-ab57-ac4ebe63583a) service to localhost/127.0.0.1:35767 2023-06-03 08:57:55,672 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data9/current/BP-106091895-172.31.14.131-1685782650553] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:57:55,672 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data10/current/BP-106091895-172.31.14.131-1685782650553] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:57:55,676 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK]] 2023-06-03 08:57:55,676 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK]] 2023-06-03 08:57:55,677 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C37359%2C1685782651287:(num 1685782663478) roll requested 2023-06-03 08:57:55,681 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:49706 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741840_1021]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data5/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data6/current]'}, localName='127.0.0.1:39217', datanodeUuid='62066c82-e951-4e78-8a40-7612e4746d85', xmitsInProgress=0}:Exception transfering block BP-106091895-172.31.14.131-1685782650553:blk_1073741840_1021 to mirror 127.0.0.1:36567: java.net.ConnectException: Connection refused 2023-06-03 08:57:55,682 WARN [Thread-640] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741840_1021 2023-06-03 08:57:55,682 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:49706 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741840_1021]] datanode.DataXceiver(323): 127.0.0.1:39217:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49706 dst: /127.0.0.1:39217 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:55,684 WARN [Thread-640] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK] 2023-06-03 08:57:55,688 WARN [Thread-640] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741841_1022 2023-06-03 08:57:55,688 WARN [Thread-640] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37977,DS-db2dee29-eee7-45df-b9c8-8b66eced9c47,DISK] 2023-06-03 08:57:55,690 WARN [Thread-640] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741842_1023 2023-06-03 08:57:55,691 WARN [Thread-640] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK] 2023-06-03 08:57:55,696 INFO [regionserver/jenkins-hbase4:0.logRoller] 
wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782663478 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782675677 2023-06-03 08:57:55,696 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK], DatanodeInfoWithStorage[127.0.0.1:37741,DS-042f6136-5d07-4e66-a698-069c60376516,DISK]] 2023-06-03 08:57:55,697 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782663478 is not closed yet, will try archiving it next time 2023-06-03 08:57:57,961 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5440807c] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39217, datanodeUuid=62066c82-e951-4e78-8a40-7612e4746d85, infoPort=41435, infoSecurePort=0, ipcPort=41063, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553):Failed to transfer BP-106091895-172.31.14.131-1685782650553:blk_1073741839_1020 to 127.0.0.1:37977 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:59,681 WARN [Listener at localhost/37483] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 08:57:59,682 WARN [ResponseProcessor for block BP-106091895-172.31.14.131-1685782650553:blk_1073741843_1024] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-106091895-172.31.14.131-1685782650553:blk_1073741843_1024 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-03 08:57:59,683 WARN [DataStreamer for file /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782675677 block BP-106091895-172.31.14.131-1685782650553:blk_1073741843_1024] hdfs.DataStreamer(1548): Error Recovery for BP-106091895-172.31.14.131-1685782650553:blk_1073741843_1024 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK], DatanodeInfoWithStorage[127.0.0.1:37741,DS-042f6136-5d07-4e66-a698-069c60376516,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK]) is bad. 
2023-06-03 08:57:59,686 INFO [Listener at localhost/37483] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:57:59,686 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:54176 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741843_1024]] datanode.DataXceiver(323): 127.0.0.1:37741:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54176 dst: /127.0.0.1:37741 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:37741 remote=/127.0.0.1:54176]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:59,687 WARN [PacketResponder: BP-106091895-172.31.14.131-1685782650553:blk_1073741843_1024, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37741]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:59,689 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:49718 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741843_1024]] datanode.DataXceiver(323): 127.0.0.1:39217:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49718 dst: /127.0.0.1:39217 java.io.InterruptedIOException: Interrupted while waiting for IO on 
channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:59,792 WARN [BP-106091895-172.31.14.131-1685782650553 heartbeating to localhost/127.0.0.1:35767] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 08:57:59,792 WARN [BP-106091895-172.31.14.131-1685782650553 heartbeating to localhost/127.0.0.1:35767] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-106091895-172.31.14.131-1685782650553 (Datanode Uuid 62066c82-e951-4e78-8a40-7612e4746d85) service to localhost/127.0.0.1:35767 2023-06-03 08:57:59,793 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data5/current/BP-106091895-172.31.14.131-1685782650553] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:57:59,793 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data6/current/BP-106091895-172.31.14.131-1685782650553] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:57:59,798 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:37741,DS-042f6136-5d07-4e66-a698-069c60376516,DISK]] 2023-06-03 08:57:59,798 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:37741,DS-042f6136-5d07-4e66-a698-069c60376516,DISK]] 2023-06-03 08:57:59,798 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C37359%2C1685782651287:(num 1685782675677) roll requested 2023-06-03 08:57:59,802 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741844_1026 2023-06-03 08:57:59,802 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK] 2023-06-03 08:57:59,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] regionserver.HRegion(9158): Flush requested on 20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:57:59,804 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 20d31b306b73cae32005b2495e3cef7d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-03 08:57:59,805 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741845_1027 2023-06-03 08:57:59,806 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK] 2023-06-03 08:57:59,808 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741846_1028 2023-06-03 08:57:59,810 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK] 2023-06-03 08:57:59,812 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741847_1029 2023-06-03 08:57:59,813 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37977,DS-db2dee29-eee7-45df-b9c8-8b66eced9c47,DISK] 2023-06-03 08:57:59,813 WARN [IPC Server handler 3 on default port 35767] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-03 08:57:59,814 WARN [IPC Server handler 3 on default port 35767] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-03 08:57:59,814 WARN [IPC Server handler 3 on default port 35767] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-03 08:57:59,815 WARN 
[Thread-653] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741849_1031 2023-06-03 08:57:59,816 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK] 2023-06-03 08:57:59,817 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741850_1032 2023-06-03 08:57:59,818 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37977,DS-db2dee29-eee7-45df-b9c8-8b66eced9c47,DISK] 2023-06-03 08:57:59,821 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:54200 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741851_1033]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data8/current]'}, localName='127.0.0.1:37741', datanodeUuid='165e39e4-ff1a-4e3c-8d47-956cb1c799ab', xmitsInProgress=0}:Exception transfering block BP-106091895-172.31.14.131-1685782650553:blk_1073741851_1033 to mirror 127.0.0.1:36567: java.net.ConnectException: Connection refused 2023-06-03 08:57:59,821 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741851_1033 2023-06-03 08:57:59,821 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782675677 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782679798 2023-06-03 08:57:59,821 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:54200 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741851_1033]] datanode.DataXceiver(323): 127.0.0.1:37741:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54200 dst: /127.0.0.1:37741 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:57:59,821 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37741,DS-042f6136-5d07-4e66-a698-069c60376516,DISK]] 2023-06-03 08:57:59,822 DEBUG [regionserver/jenkins-hbase4:0.logRoller] 
wal.AbstractFSWAL(716): hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782675677 is not closed yet, will try archiving it next time 2023-06-03 08:57:59,822 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK] 2023-06-03 08:57:59,824 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741852_1034 2023-06-03 08:57:59,825 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK] 2023-06-03 08:57:59,825 WARN [IPC Server handler 1 on default port 35767] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-03 08:57:59,825 WARN [IPC Server handler 1 on default port 35767] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-03 08:57:59,825 WARN [IPC Server handler 1 on default port 35767] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-03 08:58:00,021 WARN [sync.2] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:37741,DS-042f6136-5d07-4e66-a698-069c60376516,DISK]] 2023-06-03 08:58:00,021 WARN [sync.2] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:37741,DS-042f6136-5d07-4e66-a698-069c60376516,DISK]] 2023-06-03 08:58:00,022 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C37359%2C1685782651287:(num 1685782679798) roll requested 2023-06-03 08:58:00,026 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:54224 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741854_1036]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data8/current]'}, localName='127.0.0.1:37741', datanodeUuid='165e39e4-ff1a-4e3c-8d47-956cb1c799ab', xmitsInProgress=0}:Exception transfering block BP-106091895-172.31.14.131-1685782650553:blk_1073741854_1036 to mirror 127.0.0.1:37977: java.net.ConnectException: Connection refused 2023-06-03 08:58:00,026 WARN [Thread-661] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741854_1036 2023-06-03 08:58:00,026 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:54224 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741854_1036]] datanode.DataXceiver(323): 127.0.0.1:37741:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54224 dst: /127.0.0.1:37741 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:00,026 WARN [Thread-661] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37977,DS-db2dee29-eee7-45df-b9c8-8b66eced9c47,DISK] 2023-06-03 08:58:00,027 WARN [Thread-661] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741855_1037 2023-06-03 08:58:00,028 WARN [Thread-661] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK] 2023-06-03 08:58:00,029 WARN [Thread-661] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741856_1038 2023-06-03 08:58:00,030 WARN [Thread-661] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK] 2023-06-03 08:58:00,031 WARN [Thread-661] hdfs.DataStreamer(1658): Abandoning 
BP-106091895-172.31.14.131-1685782650553:blk_1073741857_1039 2023-06-03 08:58:00,031 WARN [Thread-661] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40263,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK] 2023-06-03 08:58:00,032 WARN [IPC Server handler 2 on default port 35767] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-03 08:58:00,032 WARN [IPC Server handler 2 on default port 35767] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-03 08:58:00,032 WARN [IPC Server handler 2 on default port 35767] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-03 08:58:00,036 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782679798 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782680022 2023-06-03 08:58:00,036 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37741,DS-042f6136-5d07-4e66-a698-069c60376516,DISK]] 2023-06-03 08:58:00,036 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782679798 is not closed yet, will try archiving it next time 2023-06-03 08:58:00,224 WARN [sync.4] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 
2023-06-03 08:58:00,231 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d/.tmp/info/94625287b9a1402b887569d3555063c2 2023-06-03 08:58:00,240 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d/.tmp/info/94625287b9a1402b887569d3555063c2 as hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d/info/94625287b9a1402b887569d3555063c2 2023-06-03 08:58:00,245 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d/info/94625287b9a1402b887569d3555063c2, entries=5, sequenceid=12, filesize=10.0 K 2023-06-03 08:58:00,246 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for 20d31b306b73cae32005b2495e3cef7d in 442ms, sequenceid=12, compaction requested=false 2023-06-03 08:58:00,246 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 20d31b306b73cae32005b2495e3cef7d: 2023-06-03 08:58:00,430 WARN [Listener at localhost/37483] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:58:00,433 WARN [Listener at localhost/37483] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:58:00,434 INFO [Listener at localhost/37483] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:58:00,438 INFO [Listener at localhost/37483] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/java.io.tmpdir/Jetty_localhost_43779_datanode____.aw74bj/webapp 2023-06-03 08:58:00,440 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782663478 to hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/oldWALs/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782663478 2023-06-03 08:58:00,528 INFO [Listener at localhost/37483] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43779 2023-06-03 08:58:00,536 WARN [Listener at localhost/33251] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:58:00,630 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x37134ca06a0c4495: Processing first storage report for DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2 from datanode e844c180-729e-49be-ab3e-30ed2e40f87e 2023-06-03 08:58:00,631 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x37134ca06a0c4495: from storage DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2 node DatanodeRegistration(127.0.0.1:39903, datanodeUuid=e844c180-729e-49be-ab3e-30ed2e40f87e, infoPort=41927, infoSecurePort=0, ipcPort=33251, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-03 08:58:00,631 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x37134ca06a0c4495: Processing first storage report for DS-0e4f015a-7fb1-48f3-8f41-0dc9a40ff132 from datanode e844c180-729e-49be-ab3e-30ed2e40f87e 2023-06-03 08:58:00,631 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x37134ca06a0c4495: from storage DS-0e4f015a-7fb1-48f3-8f41-0dc9a40ff132 node DatanodeRegistration(127.0.0.1:39903, datanodeUuid=e844c180-729e-49be-ab3e-30ed2e40f87e, infoPort=41927, infoSecurePort=0, ipcPort=33251, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:58:01,086 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@9cb1615] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:37741, datanodeUuid=165e39e4-ff1a-4e3c-8d47-956cb1c799ab, infoPort=45431, infoSecurePort=0, ipcPort=34981, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553):Failed to transfer BP-106091895-172.31.14.131-1685782650553:blk_1073741843_1025 to 127.0.0.1:37977 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:01,086 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@2ced8639] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:37741, datanodeUuid=165e39e4-ff1a-4e3c-8d47-956cb1c799ab, infoPort=45431, infoSecurePort=0, ipcPort=34981, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553):Failed to transfer BP-106091895-172.31.14.131-1685782650553:blk_1073741853_1035 to 127.0.0.1:37977 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:01,501 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:01,501 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C40165%2C1685782651218:(num 1685782651394) roll requested 2023-06-03 08:58:01,506 WARN [Thread-701] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741859_1041 2023-06-03 08:58:01,506 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:01,507 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused 
by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:01,507 WARN [Thread-701] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37977,DS-db2dee29-eee7-45df-b9c8-8b66eced9c47,DISK] 2023-06-03 08:58:01,514 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-06-03 08:58:01,514 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/WALs/jenkins-hbase4.apache.org,40165,1685782651218/jenkins-hbase4.apache.org%2C40165%2C1685782651218.1685782651394 with entries=88, filesize=43.70 KB; new WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/WALs/jenkins-hbase4.apache.org,40165,1685782651218/jenkins-hbase4.apache.org%2C40165%2C1685782651218.1685782681501 2023-06-03 08:58:01,514 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39903,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK], DatanodeInfoWithStorage[127.0.0.1:37741,DS-042f6136-5d07-4e66-a698-069c60376516,DISK]] 2023-06-03 08:58:01,515 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:01,515 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/WALs/jenkins-hbase4.apache.org,40165,1685782651218/jenkins-hbase4.apache.org%2C40165%2C1685782651218.1685782651394 is not closed yet, will try archiving it next time 2023-06-03 08:58:01,515 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/WALs/jenkins-hbase4.apache.org,40165,1685782651218/jenkins-hbase4.apache.org%2C40165%2C1685782651218.1685782651394; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:07,629 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5c50fc17] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39903, datanodeUuid=e844c180-729e-49be-ab3e-30ed2e40f87e, infoPort=41927, infoSecurePort=0, ipcPort=33251, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553):Failed to transfer BP-106091895-172.31.14.131-1685782650553:blk_1073741836_1012 to 127.0.0.1:37977 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:08,630 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@44ca1f2c] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39903, datanodeUuid=e844c180-729e-49be-ab3e-30ed2e40f87e, infoPort=41927, infoSecurePort=0, ipcPort=33251, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553):Failed to transfer BP-106091895-172.31.14.131-1685782650553:blk_1073741828_1004 to 127.0.0.1:39217 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:10,629 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@1f21e3e6] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39903, datanodeUuid=e844c180-729e-49be-ab3e-30ed2e40f87e, infoPort=41927, infoSecurePort=0, ipcPort=33251, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553):Failed to transfer BP-106091895-172.31.14.131-1685782650553:blk_1073741827_1003 to 127.0.0.1:37977 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:10,630 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@2a1fe293] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39903, datanodeUuid=e844c180-729e-49be-ab3e-30ed2e40f87e, infoPort=41927, infoSecurePort=0, ipcPort=33251, 
storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553):Failed to transfer BP-106091895-172.31.14.131-1685782650553:blk_1073741825_1001 to 127.0.0.1:39217 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:13,630 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@bb0d27a] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39903, datanodeUuid=e844c180-729e-49be-ab3e-30ed2e40f87e, infoPort=41927, infoSecurePort=0, ipcPort=33251, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553):Failed to transfer BP-106091895-172.31.14.131-1685782650553:blk_1073741826_1002 to 127.0.0.1:39217 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:13,630 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@1f979f5f] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39903, datanodeUuid=e844c180-729e-49be-ab3e-30ed2e40f87e, infoPort=41927, infoSecurePort=0, ipcPort=33251, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553):Failed to transfer BP-106091895-172.31.14.131-1685782650553:blk_1073741837_1013 to 127.0.0.1:37977 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:14,630 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5ae70186] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39903, datanodeUuid=e844c180-729e-49be-ab3e-30ed2e40f87e, infoPort=41927, infoSecurePort=0, ipcPort=33251, storageInfo=lv=-57;cid=testClusterID;nsid=1642501972;c=1685782650553):Failed to transfer BP-106091895-172.31.14.131-1685782650553:blk_1073741831_1007 to 127.0.0.1:39217 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at 
java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:18,986 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-996418912_17 at /127.0.0.1:50582 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741861_1043]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data8/current]'}, localName='127.0.0.1:37741', datanodeUuid='165e39e4-ff1a-4e3c-8d47-956cb1c799ab', xmitsInProgress=0}:Exception transfering block BP-106091895-172.31.14.131-1685782650553:blk_1073741861_1043 to mirror 127.0.0.1:39217: java.net.ConnectException: Connection refused 2023-06-03 08:58:18,986 WARN [Thread-723] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741861_1043 2023-06-03 08:58:18,986 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-996418912_17 at /127.0.0.1:50582 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741861_1043]] datanode.DataXceiver(323): 127.0.0.1:37741:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50582 dst: /127.0.0.1:37741 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:18,986 WARN [Thread-723] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK] 2023-06-03 08:58:18,987 WARN [Thread-723] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741862_1044 2023-06-03 08:58:18,988 WARN [Thread-723] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37977,DS-db2dee29-eee7-45df-b9c8-8b66eced9c47,DISK] 2023-06-03 08:58:18,998 INFO [Listener at localhost/33251] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782680022 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782698981 2023-06-03 08:58:18,998 DEBUG [Listener at localhost/33251] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39903,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK], DatanodeInfoWithStorage[127.0.0.1:37741,DS-042f6136-5d07-4e66-a698-069c60376516,DISK]] 2023-06-03 08:58:18,998 DEBUG [Listener at localhost/33251] wal.AbstractFSWAL(716): 
hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.1685782680022 is not closed yet, will try archiving it next time 2023-06-03 08:58:19,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37359] regionserver.HRegion(9158): Flush requested on 20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:58:19,007 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 20d31b306b73cae32005b2495e3cef7d 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-06-03 08:58:19,008 INFO [sync.3] wal.FSHLog(774): LowReplication-Roller was enabled. 2023-06-03 08:58:19,014 WARN [Thread-731] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741864_1046 2023-06-03 08:58:19,015 WARN [Thread-731] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37977,DS-db2dee29-eee7-45df-b9c8-8b66eced9c47,DISK] 2023-06-03 08:58:19,026 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-03 08:58:19,026 INFO [Listener at localhost/33251] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-03 08:58:19,026 DEBUG [Listener at localhost/33251] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x73b4a6b8 to 127.0.0.1:52426 2023-06-03 08:58:19,026 DEBUG [Listener at localhost/33251] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:58:19,027 DEBUG [Listener at localhost/33251] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-03 08:58:19,027 DEBUG [Listener at localhost/33251] util.JVMClusterUtil(257): Found active master hash=1617765171, stopped=false 2023-06-03 08:58:19,027 INFO [Listener at localhost/33251] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40165,1685782651218 2023-06-03 08:58:19,028 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d/.tmp/info/008a0d0d65c44a7ebd1093b403edbd3c 2023-06-03 08:58:19,029 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-03 08:58:19,029 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-03 08:58:19,029 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:38719-0x1008fe8642d0005, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-03 08:58:19,029 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:58:19,029 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/running 2023-06-03 08:58:19,029 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38719-0x1008fe8642d0005, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:58:19,029 INFO [Listener at localhost/33251] procedure2.ProcedureExecutor(629): Stopping 2023-06-03 08:58:19,029 DEBUG [Listener at localhost/33251] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6e2d83cc to 127.0.0.1:52426 2023-06-03 08:58:19,029 DEBUG [Listener at localhost/33251] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:58:19,030 INFO [Listener at localhost/33251] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,37359,1685782651287' ***** 2023-06-03 08:58:19,030 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:58:19,030 INFO [Listener at localhost/33251] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-03 08:58:19,030 INFO [Listener at localhost/33251] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,38719,1685782652542' ***** 2023-06-03 08:58:19,030 INFO [Listener at localhost/33251] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-03 08:58:19,030 INFO [RS:0;jenkins-hbase4:37359] regionserver.HeapMemoryManager(220): Stopping 2023-06-03 08:58:19,030 INFO [RS:1;jenkins-hbase4:38719] regionserver.HeapMemoryManager(220): Stopping 2023-06-03 08:58:19,031 INFO [RS:1;jenkins-hbase4:38719] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-03 08:58:19,031 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-03 08:58:19,031 INFO [RS:1;jenkins-hbase4:38719] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-03 08:58:19,031 INFO [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38719,1685782652542 2023-06-03 08:58:19,031 DEBUG [RS:1;jenkins-hbase4:38719] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1c173c2f to 127.0.0.1:52426 2023-06-03 08:58:19,031 DEBUG [RS:1;jenkins-hbase4:38719] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:58:19,031 INFO [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38719,1685782652542; all regions closed. 2023-06-03 08:58:19,032 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,38719,1685782652542 2023-06-03 08:58:19,036 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:19,037 ERROR [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer(1539): Shutdown / close of WAL failed: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... 2023-06-03 08:58:19,037 DEBUG [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer(1540): Shutdown / close exception details: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:19,037 DEBUG [RS:1;jenkins-hbase4:38719] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:58:19,037 INFO [RS:1;jenkins-hbase4:38719] regionserver.LeaseManager(133): Closed leases 2023-06-03 08:58:19,038 INFO [RS:1;jenkins-hbase4:38719] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-03 08:58:19,038 INFO [RS:1;jenkins-hbase4:38719] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-03 08:58:19,038 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-03 08:58:19,038 INFO [RS:1;jenkins-hbase4:38719] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-03 08:58:19,038 INFO [RS:1;jenkins-hbase4:38719] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
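The cluster of failures above is the heart of testLogRollOnDatanodeDeath: DataTransfer threads keep getting "Connection refused" from the dead datanodes, and the WAL writers abort their pipelines with "All datanodes [...] are bad". A minimal Python sketch, assuming the raw log is available as a file with one entry per line (the filename here is hypothetical), that tallies which datanodes those two failure modes point at:

import re
from collections import Counter

LOG = "TestLogRolling-testLogRollOnDatanodeDeath.log"   # hypothetical path to the raw log

bad_pipeline = re.compile(r"All datanodes \[(.+?)\] are bad")
refused = re.compile(
    r"Failed to transfer \S+ to (\d{1,3}(?:\.\d{1,3}){3}:\d+) got java\.net\.ConnectException"
)

bad_targets = Counter()
refused_targets = Counter()

with open(LOG, encoding="utf-8") as fh:
    for line in fh:
        m = bad_pipeline.search(line)
        if m:
            # e.g. DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-...,DISK]
            bad_targets[m.group(1)] += 1
        m = refused.search(line)
        if m:
            # e.g. 127.0.0.1:37977, presumably one of the datanodes the test takes down
            refused_targets[m.group(1)] += 1

print("pipelines declared bad:", bad_targets.most_common())
print("connection-refused targets:", refused_targets.most_common())

Against the entries above, the connection-refused counter would be dominated by 127.0.0.1:37977 and 127.0.0.1:39217, while the "all datanodes are bad" pipelines all name 127.0.0.1:36567.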
2023-06-03 08:58:19,038 INFO [RS:1;jenkins-hbase4:38719] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38719 2023-06-03 08:58:19,042 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38719,1685782652542 2023-06-03 08:58:19,042 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:38719-0x1008fe8642d0005, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38719,1685782652542 2023-06-03 08:58:19,042 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:58:19,042 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:38719-0x1008fe8642d0005, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:58:19,042 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:58:19,043 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38719,1685782652542] 2023-06-03 08:58:19,043 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38719,1685782652542; numProcessing=1 2023-06-03 08:58:19,043 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d/.tmp/info/008a0d0d65c44a7ebd1093b403edbd3c as hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d/info/008a0d0d65c44a7ebd1093b403edbd3c 2023-06-03 08:58:19,044 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38719,1685782652542 already deleted, retry=false 2023-06-03 08:58:19,044 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38719,1685782652542 expired; onlineServers=1 2023-06-03 08:58:19,049 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d/info/008a0d0d65c44a7ebd1093b403edbd3c, entries=8, sequenceid=25, filesize=13.2 K 2023-06-03 08:58:19,050 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for 20d31b306b73cae32005b2495e3cef7d in 43ms, sequenceid=25, compaction requested=false 2023-06-03 08:58:19,050 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 20d31b306b73cae32005b2495e3cef7d: 2023-06-03 08:58:19,050 DEBUG [MemStoreFlusher.0] 
regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-06-03 08:58:19,050 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-03 08:58:19,050 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/default/TestLogRolling-testLogRollOnDatanodeDeath/20d31b306b73cae32005b2495e3cef7d/info/008a0d0d65c44a7ebd1093b403edbd3c because midkey is the same as first or last row 2023-06-03 08:58:19,051 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-03 08:58:19,051 INFO [RS:0;jenkins-hbase4:37359] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-03 08:58:19,051 INFO [RS:0;jenkins-hbase4:37359] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-03 08:58:19,051 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(3303): Received CLOSE for 8d0a8809e1ea4f068d76b3e5472fa4b1 2023-06-03 08:58:19,051 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(3303): Received CLOSE for 20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:58:19,051 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:58:19,051 DEBUG [RS:0;jenkins-hbase4:37359] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4ea76624 to 127.0.0.1:52426 2023-06-03 08:58:19,051 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8d0a8809e1ea4f068d76b3e5472fa4b1, disabling compactions & flushes 2023-06-03 08:58:19,051 DEBUG [RS:0;jenkins-hbase4:37359] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:58:19,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. 2023-06-03 08:58:19,052 INFO [RS:0;jenkins-hbase4:37359] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-03 08:58:19,052 INFO [RS:0;jenkins-hbase4:37359] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-03 08:58:19,052 INFO [RS:0;jenkins-hbase4:37359] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-03 08:58:19,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. 2023-06-03 08:58:19,052 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-03 08:58:19,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. after waiting 0 ms 2023-06-03 08:58:19,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. 
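Just above, the second flush of region 20d31b306b73cae32005b2495e3cef7d completes ("~10.50 KB ... in 43ms, sequenceid=25") and the split policies are evaluated against the new store file. A small sketch, under the same assumptions as before (raw log on disk, hypothetical filename), that pulls duration and sizes out of every "Finished flush" entry:

import re

# Matches entries like "Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760,
# currentSize=9.46 KB/9684 for 20d31b306b73cae32005b2495e3cef7d in 43ms, sequenceid=25, ..."
FLUSH = re.compile(
    r"Finished flush of dataSize ~[\d.]+ [KMG]?B/(\d+), heapSize ~[\d.]+ [KMG]?B/(\d+), "
    r"currentSize=[\d.]+ [KMG]?B/\d+ for (\w+) in (\d+)ms, sequenceid=(\d+)"
)

def flush_stats(path="TestLogRolling-testLogRollOnDatanodeDeath.log"):
    rows = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            m = FLUSH.search(line)
            if m:
                data_bytes, heap_bytes, region, ms, seqid = m.groups()
                rows.append({
                    "region": region,
                    "dataBytes": int(data_bytes),
                    "heapBytes": int(heap_bytes),
                    "millis": int(ms),
                    "sequenceid": int(seqid),
                })
    return rows

# The entries above would yield, among others:
# {"region": "20d31b306b73cae32005b2495e3cef7d", "dataBytes": 10757,
#  "heapBytes": 11760, "millis": 43, "sequenceid": 25}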
2023-06-03 08:58:19,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8d0a8809e1ea4f068d76b3e5472fa4b1 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-03 08:58:19,052 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-03 08:58:19,052 DEBUG [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1478): Online Regions={8d0a8809e1ea4f068d76b3e5472fa4b1=hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1., 1588230740=hbase:meta,,1.1588230740, 20d31b306b73cae32005b2495e3cef7d=TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d.} 2023-06-03 08:58:19,052 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 08:58:19,053 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 08:58:19,053 DEBUG [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1504): Waiting on 1588230740, 20d31b306b73cae32005b2495e3cef7d, 8d0a8809e1ea4f068d76b3e5472fa4b1 2023-06-03 08:58:19,053 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 08:58:19,053 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 08:58:19,053 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 08:58:19,053 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.92 KB heapSize=5.45 KB 2023-06-03 08:58:19,053 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:19,053 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C37359%2C1685782651287.meta:.meta(num 1685782651910) roll requested 2023-06-03 08:58:19,054 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 08:58:19,054 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,37359,1685782651287: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:19,055 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-06-03 08:58:19,057 WARN [Thread-739] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741866_1048 2023-06-03 08:58:19,057 WARN [Thread-739] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK] 2023-06-03 08:58:19,058 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-06-03 08:58:19,058 WARN [Thread-739] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741867_1049 2023-06-03 08:58:19,059 WARN [Thread-740] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741868_1050 2023-06-03 08:58:19,059 WARN [Thread-739] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37977,DS-db2dee29-eee7-45df-b9c8-8b66eced9c47,DISK] 2023-06-03 08:58:19,059 WARN [Thread-740] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK] 2023-06-03 08:58:19,060 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-06-03 08:58:19,060 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-06-03 08:58:19,060 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-06-03 08:58:19,060 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 980942848, "init": 513802240, "max": 2051014656, "used": 349019744 }, "NonHeapMemoryUsage": { "committed": 133914624, "init": 2555904, "max": -1, "used": 131318552 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-06-03 08:58:19,063 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:59578 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741870_1052]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data4/current]'}, localName='127.0.0.1:39903', datanodeUuid='e844c180-729e-49be-ab3e-30ed2e40f87e', xmitsInProgress=0}:Exception transfering block BP-106091895-172.31.14.131-1685782650553:blk_1073741870_1052 to mirror 127.0.0.1:37977: java.net.ConnectException: Connection refused 2023-06-03 08:58:19,063 WARN [Thread-740] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741870_1052 2023-06-03 08:58:19,063 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2075889911_17 at /127.0.0.1:59578 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741870_1052]] datanode.DataXceiver(323): 127.0.0.1:39903:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:59578 dst: /127.0.0.1:39903 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:19,064 WARN [Thread-740] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37977,DS-db2dee29-eee7-45df-b9c8-8b66eced9c47,DISK] 2023-06-03 08:58:19,066 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-06-03 08:58:19,066 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.meta.1685782651910.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.meta.1685782699054.meta 2023-06-03 08:58:19,067 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37741,DS-042f6136-5d07-4e66-a698-069c60376516,DISK], DatanodeInfoWithStorage[127.0.0.1:39903,DS-6a089163-c4cc-41e9-a70c-a0d9b167c4b2,DISK]] 2023-06-03 08:58:19,067 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.meta.1685782651910.meta is not closed yet, will try archiving it next time 2023-06-03 08:58:19,067 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:19,067 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287/jenkins-hbase4.apache.org%2C37359%2C1685782651287.meta.1685782651910.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:19,070 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40165] master.MasterRpcServices(609): jenkins-hbase4.apache.org,37359,1685782651287 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,37359,1685782651287: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36567,DS-2778efd4-db77-4e44-a349-d3e72ad644df,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:19,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/namespace/8d0a8809e1ea4f068d76b3e5472fa4b1/.tmp/info/ec1328f761fc475f82dbf3b97d469df0 2023-06-03 08:58:19,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/namespace/8d0a8809e1ea4f068d76b3e5472fa4b1/.tmp/info/ec1328f761fc475f82dbf3b97d469df0 as hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/namespace/8d0a8809e1ea4f068d76b3e5472fa4b1/info/ec1328f761fc475f82dbf3b97d469df0 2023-06-03 08:58:19,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/namespace/8d0a8809e1ea4f068d76b3e5472fa4b1/info/ec1328f761fc475f82dbf3b97d469df0, entries=2, sequenceid=6, filesize=4.8 K 2023-06-03 08:58:19,090 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 8d0a8809e1ea4f068d76b3e5472fa4b1 in 38ms, sequenceid=6, compaction requested=false 2023-06-03 08:58:19,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/data/hbase/namespace/8d0a8809e1ea4f068d76b3e5472fa4b1/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-03 08:58:19,096 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. 2023-06-03 08:58:19,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8d0a8809e1ea4f068d76b3e5472fa4b1: 2023-06-03 08:58:19,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685782651969.8d0a8809e1ea4f068d76b3e5472fa4b1. 2023-06-03 08:58:19,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 20d31b306b73cae32005b2495e3cef7d, disabling compactions & flushes 2023-06-03 08:58:19,096 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 2023-06-03 08:58:19,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 2023-06-03 08:58:19,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. after waiting 0 ms 2023-06-03 08:58:19,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 2023-06-03 08:58:19,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 20d31b306b73cae32005b2495e3cef7d: 2023-06-03 08:58:19,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 2023-06-03 08:58:19,253 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-03 08:58:19,253 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(3303): Received CLOSE for 20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:58:19,253 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 08:58:19,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 20d31b306b73cae32005b2495e3cef7d, disabling compactions & flushes 2023-06-03 08:58:19,253 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 08:58:19,253 DEBUG [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1504): Waiting on 1588230740, 20d31b306b73cae32005b2495e3cef7d 2023-06-03 08:58:19,253 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 08:58:19,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 
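The abort path above includes a "Dump of metrics as JSON on abort" whose top-level "beans" key is repeated four times, so feeding the whole blob to json.loads() would keep only the last, empty array. A sketch that instead extracts just the HeapMemoryUsage object; the literal below is copied from that dump earlier in this log:

import json
import re

dump = '''{ "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl",
"ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 980942848, "init": 513802240,
"max": 2051014656, "used": 349019744 }, "NonHeapMemoryUsage": { "committed": 133914624, "init": 2555904,
"max": -1, "used": 131318552 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ],
"beans": [], "beans": [], "beans": [] }'''

# Pull out only the HeapMemoryUsage object, which is itself valid JSON.
m = re.search(r'"HeapMemoryUsage":\s*({[^}]*})', dump)
heap = json.loads(m.group(1))
print("heap used MB:", heap["used"] / (1024 * 1024))   # ~333 MB at abort time
print("heap max MB:", heap["max"] / (1024 * 1024))      # 1956 MB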
2023-06-03 08:58:19,253 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 08:58:19,253 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 08:58:19,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 2023-06-03 08:58:19,253 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 08:58:19,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. after waiting 0 ms 2023-06-03 08:58:19,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 2023-06-03 08:58:19,253 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-03 08:58:19,254 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 20d31b306b73cae32005b2495e3cef7d: 2023-06-03 08:58:19,254 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnDatanodeDeath,,1685782652656.20d31b306b73cae32005b2495e3cef7d. 2023-06-03 08:58:19,328 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:38719-0x1008fe8642d0005, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:58:19,328 INFO [RS:1;jenkins-hbase4:38719] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38719,1685782652542; zookeeper connection closed. 2023-06-03 08:58:19,329 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:38719-0x1008fe8642d0005, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:58:19,329 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@182830c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@182830c 2023-06-03 08:58:19,453 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-06-03 08:58:19,453 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37359,1685782651287; all regions closed. 
2023-06-03 08:58:19,454 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:58:19,459 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/WALs/jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:58:19,462 DEBUG [RS:0;jenkins-hbase4:37359] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:58:19,462 INFO [RS:0;jenkins-hbase4:37359] regionserver.LeaseManager(133): Closed leases 2023-06-03 08:58:19,463 INFO [RS:0;jenkins-hbase4:37359] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-03 08:58:19,463 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-03 08:58:19,463 INFO [RS:0;jenkins-hbase4:37359] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37359 2023-06-03 08:58:19,465 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37359,1685782651287 2023-06-03 08:58:19,465 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:58:19,467 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37359,1685782651287] 2023-06-03 08:58:19,467 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37359,1685782651287; numProcessing=2 2023-06-03 08:58:19,468 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37359,1685782651287 already deleted, retry=false 2023-06-03 08:58:19,468 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37359,1685782651287 expired; onlineServers=0 2023-06-03 08:58:19,468 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,40165,1685782651218' ***** 2023-06-03 08:58:19,468 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-03 08:58:19,468 DEBUG [M:0;jenkins-hbase4:40165] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6452abe6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-03 08:58:19,469 INFO [M:0;jenkins-hbase4:40165] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40165,1685782651218 2023-06-03 08:58:19,469 INFO [M:0;jenkins-hbase4:40165] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40165,1685782651218; all regions closed. 
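The ZooKeeper events above show the master expiring each regionserver's ephemeral node in turn ("processing expiration [...]"). A sketch, again with a hypothetical log path, that lists the expired servers in the order they appear:

import re

# Matches entries like "processing expiration [jenkins-hbase4.apache.org,38719,1685782652542]"
EXPIRED = re.compile(r"processing expiration \[([^\]]+)\]")

def expired_servers(path="TestLogRolling-testLogRollOnDatanodeDeath.log"):
    servers = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            for name in EXPIRED.findall(line):
                if name not in servers:
                    servers.append(name)
    return servers

# Expected for the run above:
# ['jenkins-hbase4.apache.org,38719,1685782652542',
#  'jenkins-hbase4.apache.org,37359,1685782651287']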
2023-06-03 08:58:19,469 DEBUG [M:0;jenkins-hbase4:40165] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:58:19,469 DEBUG [M:0;jenkins-hbase4:40165] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-03 08:58:19,469 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-03 08:58:19,469 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782651502] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782651502,5,FailOnTimeoutGroup] 2023-06-03 08:58:19,469 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782651502] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782651502,5,FailOnTimeoutGroup] 2023-06-03 08:58:19,469 DEBUG [M:0;jenkins-hbase4:40165] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-03 08:58:19,470 INFO [M:0;jenkins-hbase4:40165] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-03 08:58:19,470 INFO [M:0;jenkins-hbase4:40165] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-03 08:58:19,470 INFO [M:0;jenkins-hbase4:40165] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-06-03 08:58:19,470 DEBUG [M:0;jenkins-hbase4:40165] master.HMaster(1512): Stopping service threads 2023-06-03 08:58:19,470 INFO [M:0;jenkins-hbase4:40165] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-03 08:58:19,471 ERROR [M:0;jenkins-hbase4:40165] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-03 08:58:19,471 INFO [M:0;jenkins-hbase4:40165] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-03 08:58:19,471 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-03 08:58:19,472 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-03 08:58:19,472 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:58:19,472 DEBUG [M:0;jenkins-hbase4:40165] zookeeper.ZKUtil(398): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-03 08:58:19,472 WARN [M:0;jenkins-hbase4:40165] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-03 08:58:19,472 INFO [M:0;jenkins-hbase4:40165] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-03 08:58:19,472 INFO [M:0;jenkins-hbase4:40165] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-03 08:58:19,472 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 08:58:19,473 DEBUG [M:0;jenkins-hbase4:40165] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-03 08:58:19,473 INFO [M:0;jenkins-hbase4:40165] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:58:19,473 DEBUG [M:0;jenkins-hbase4:40165] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:58:19,473 DEBUG [M:0;jenkins-hbase4:40165] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-03 08:58:19,473 DEBUG [M:0;jenkins-hbase4:40165] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-03 08:58:19,473 INFO [M:0;jenkins-hbase4:40165] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.07 KB heapSize=45.73 KB 2023-06-03 08:58:19,481 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-996418912_17 at /127.0.0.1:50652 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741872_1054]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data8/current]'}, localName='127.0.0.1:37741', datanodeUuid='165e39e4-ff1a-4e3c-8d47-956cb1c799ab', xmitsInProgress=0}:Exception transfering block BP-106091895-172.31.14.131-1685782650553:blk_1073741872_1054 to mirror 127.0.0.1:37977: java.net.ConnectException: Connection refused 2023-06-03 08:58:19,481 WARN [Thread-755] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741872_1054 2023-06-03 08:58:19,481 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-996418912_17 at /127.0.0.1:50652 [Receiving block BP-106091895-172.31.14.131-1685782650553:blk_1073741872_1054]] datanode.DataXceiver(323): 127.0.0.1:37741:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50652 dst: /127.0.0.1:37741 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:19,481 WARN [Thread-755] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37977,DS-db2dee29-eee7-45df-b9c8-8b66eced9c47,DISK] 2023-06-03 08:58:19,482 WARN [Thread-755] hdfs.DataStreamer(1658): Abandoning BP-106091895-172.31.14.131-1685782650553:blk_1073741873_1055 2023-06-03 08:58:19,483 WARN [Thread-755] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39217,DS-47bc18bc-5004-40a6-bb8f-c7db108f408c,DISK] 2023-06-03 08:58:19,488 INFO [M:0;jenkins-hbase4:40165] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.07 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/594009cf704749bc91a940b7914cda4b 2023-06-03 08:58:19,494 DEBUG [M:0;jenkins-hbase4:40165] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/594009cf704749bc91a940b7914cda4b as 
hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/594009cf704749bc91a940b7914cda4b 2023-06-03 08:58:19,499 INFO [M:0;jenkins-hbase4:40165] regionserver.HStore(1080): Added hdfs://localhost:35767/user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/594009cf704749bc91a940b7914cda4b, entries=11, sequenceid=92, filesize=7.0 K 2023-06-03 08:58:19,500 INFO [M:0;jenkins-hbase4:40165] regionserver.HRegion(2948): Finished flush of dataSize ~38.07 KB/38985, heapSize ~45.72 KB/46816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=92, compaction requested=false 2023-06-03 08:58:19,501 INFO [M:0;jenkins-hbase4:40165] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:58:19,501 DEBUG [M:0;jenkins-hbase4:40165] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 08:58:19,501 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/5cac990d-7cfc-bd98-33c5-9b322ec19225/MasterData/WALs/jenkins-hbase4.apache.org,40165,1685782651218 2023-06-03 08:58:19,504 INFO [M:0;jenkins-hbase4:40165] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-03 08:58:19,504 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-03 08:58:19,504 INFO [M:0;jenkins-hbase4:40165] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40165 2023-06-03 08:58:19,506 DEBUG [M:0;jenkins-hbase4:40165] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40165,1685782651218 already deleted, retry=false 2023-06-03 08:58:19,611 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-03 08:58:19,629 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:58:19,629 INFO [M:0;jenkins-hbase4:40165] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40165,1685782651218; zookeeper connection closed. 2023-06-03 08:58:19,629 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): master:40165-0x1008fe8642d0000, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:58:19,729 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:58:19,729 INFO [RS:0;jenkins-hbase4:37359] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37359,1685782651287; zookeeper connection closed. 
2023-06-03 08:58:19,729 DEBUG [Listener at localhost/42185-EventThread] zookeeper.ZKWatcher(600): regionserver:37359-0x1008fe8642d0001, quorum=127.0.0.1:52426, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:58:19,730 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1d8fe994] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1d8fe994 2023-06-03 08:58:19,730 INFO [Listener at localhost/33251] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-06-03 08:58:19,730 WARN [Listener at localhost/33251] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 08:58:19,734 INFO [Listener at localhost/33251] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:58:19,837 WARN [BP-106091895-172.31.14.131-1685782650553 heartbeating to localhost/127.0.0.1:35767] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 08:58:19,837 WARN [BP-106091895-172.31.14.131-1685782650553 heartbeating to localhost/127.0.0.1:35767] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-106091895-172.31.14.131-1685782650553 (Datanode Uuid e844c180-729e-49be-ab3e-30ed2e40f87e) service to localhost/127.0.0.1:35767 2023-06-03 08:58:19,838 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data3/current/BP-106091895-172.31.14.131-1685782650553] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:58:19,838 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data4/current/BP-106091895-172.31.14.131-1685782650553] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:58:19,840 WARN [Listener at localhost/33251] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 08:58:19,843 INFO [Listener at localhost/33251] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:58:19,946 WARN [BP-106091895-172.31.14.131-1685782650553 heartbeating to localhost/127.0.0.1:35767] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 08:58:19,946 WARN [BP-106091895-172.31.14.131-1685782650553 heartbeating to localhost/127.0.0.1:35767] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-106091895-172.31.14.131-1685782650553 (Datanode Uuid 165e39e4-ff1a-4e3c-8d47-956cb1c799ab) service to localhost/127.0.0.1:35767 2023-06-03 08:58:19,947 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data7/current/BP-106091895-172.31.14.131-1685782650553] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:58:19,947 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/cluster_07bcf2ed-d9a0-ec8e-45f3-0b645fa5de68/dfs/data/data8/current/BP-106091895-172.31.14.131-1685782650553] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:58:19,958 INFO [Listener at localhost/33251] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:58:20,074 INFO [Listener at localhost/33251] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-03 08:58:20,104 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-03 08:58:20,115 INFO [Listener at localhost/33251] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=75 (was 52) Potentially hanging thread: RS-EventLoopGroup-6-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost:35767 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:35767 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1524112806) connection to localhost/127.0.0.1:35767 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1524112806) connection to localhost/127.0.0.1:35767 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-12-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost:35767 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33251 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) 
org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ForkJoinPool-2-worker-6 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: nioEventLoopGroup-12-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1524112806) connection to localhost/127.0.0.1:35767 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=459 (was 439) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=76 (was 111), ProcessCount=169 (was 170), AvailableMemoryMB=1551 (was 2140) 2023-06-03 08:58:20,124 INFO [Listener at localhost/33251] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=75, OpenFileDescriptor=459, MaxFileDescriptor=60000, SystemLoadAverage=76, ProcessCount=169, AvailableMemoryMB=1550 2023-06-03 08:58:20,124 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-03 08:58:20,124 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/hadoop.log.dir so I do NOT create it in target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf 2023-06-03 08:58:20,124 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/58a7a99c-8f43-e314-70eb-beec9647dd85/hadoop.tmp.dir so I do NOT create it in target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf 2023-06-03 08:58:20,124 INFO [Listener at localhost/33251] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60, deleteOnExit=true 2023-06-03 08:58:20,124 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-03 08:58:20,125 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/test.cache.data 
in system properties and HBase conf 2023-06-03 08:58:20,125 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/hadoop.tmp.dir in system properties and HBase conf 2023-06-03 08:58:20,125 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/hadoop.log.dir in system properties and HBase conf 2023-06-03 08:58:20,125 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-03 08:58:20,125 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-03 08:58:20,125 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-03 08:58:20,125 DEBUG [Listener at localhost/33251] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-03 08:58:20,125 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-03 08:58:20,125 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-03 08:58:20,126 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-03 08:58:20,126 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-03 08:58:20,126 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-03 08:58:20,126 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-03 08:58:20,126 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-03 08:58:20,126 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-03 08:58:20,126 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-03 08:58:20,126 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/nfs.dump.dir in system properties and HBase conf 2023-06-03 08:58:20,126 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/java.io.tmpdir in system properties and HBase conf 2023-06-03 08:58:20,126 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-03 08:58:20,127 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-03 08:58:20,127 INFO [Listener at localhost/33251] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-03 08:58:20,128 WARN [Listener at localhost/33251] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-03 08:58:20,131 WARN [Listener at localhost/33251] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-03 08:58:20,131 WARN [Listener at localhost/33251] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-03 08:58:20,172 WARN [Listener at localhost/33251] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:58:20,173 INFO [Listener at localhost/33251] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:58:20,178 INFO [Listener at localhost/33251] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/java.io.tmpdir/Jetty_localhost_34801_hdfs____3eerc2/webapp 2023-06-03 08:58:20,268 INFO [Listener at localhost/33251] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34801 2023-06-03 08:58:20,269 WARN [Listener at localhost/33251] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-03 08:58:20,272 WARN [Listener at localhost/33251] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-03 08:58:20,272 WARN [Listener at localhost/33251] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-03 08:58:20,314 WARN [Listener at localhost/35813] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:58:20,325 WARN [Listener at localhost/35813] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:58:20,327 WARN [Listener at localhost/35813] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:58:20,328 INFO [Listener at localhost/35813] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:58:20,334 INFO [Listener at localhost/35813] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/java.io.tmpdir/Jetty_localhost_34997_datanode____14lwql/webapp 2023-06-03 08:58:20,425 INFO [Listener at localhost/35813] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34997 2023-06-03 08:58:20,431 WARN [Listener at localhost/46465] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:58:20,445 WARN [Listener at localhost/46465] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:58:20,447 WARN [Listener at localhost/46465] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:58:20,448 INFO [Listener at localhost/46465] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:58:20,451 INFO [Listener at localhost/46465] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/java.io.tmpdir/Jetty_localhost_33607_datanode____a0v3yc/webapp 2023-06-03 08:58:20,541 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7f80d4c908c7358c: Processing first storage report for DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484 from datanode 432754c7-bf16-47e8-9c50-fffff622125f 2023-06-03 08:58:20,541 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7f80d4c908c7358c: from storage DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484 node DatanodeRegistration(127.0.0.1:43659, datanodeUuid=432754c7-bf16-47e8-9c50-fffff622125f, infoPort=35357, infoSecurePort=0, ipcPort=46465, storageInfo=lv=-57;cid=testClusterID;nsid=1070418625;c=1685782700134), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:58:20,541 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7f80d4c908c7358c: Processing first storage report for DS-d54309db-b77d-4c95-acd7-864d3101926c from datanode 432754c7-bf16-47e8-9c50-fffff622125f 2023-06-03 08:58:20,541 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7f80d4c908c7358c: from storage DS-d54309db-b77d-4c95-acd7-864d3101926c node DatanodeRegistration(127.0.0.1:43659, datanodeUuid=432754c7-bf16-47e8-9c50-fffff622125f, infoPort=35357, infoSecurePort=0, ipcPort=46465, storageInfo=lv=-57;cid=testClusterID;nsid=1070418625;c=1685782700134), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:58:20,553 INFO [Listener at localhost/46465] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33607 2023-06-03 08:58:20,561 WARN [Listener at localhost/41967] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:58:20,617 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-03 08:58:20,658 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3b55c92fc14b8fcb: Processing first storage report for DS-dc8771dc-4912-47c9-bf8a-1dc714a02252 from datanode a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3 2023-06-03 08:58:20,658 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3b55c92fc14b8fcb: from storage DS-dc8771dc-4912-47c9-bf8a-1dc714a02252 node DatanodeRegistration(127.0.0.1:35257, datanodeUuid=a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3, infoPort=46819, infoSecurePort=0, ipcPort=41967, storageInfo=lv=-57;cid=testClusterID;nsid=1070418625;c=1685782700134), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:58:20,658 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3b55c92fc14b8fcb: Processing first storage report for DS-1005a0e1-3f13-4db6-bbe8-4f025e44de98 from datanode a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3 2023-06-03 08:58:20,658 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3b55c92fc14b8fcb: from storage DS-1005a0e1-3f13-4db6-bbe8-4f025e44de98 node DatanodeRegistration(127.0.0.1:35257, datanodeUuid=a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3, 
infoPort=46819, infoSecurePort=0, ipcPort=41967, storageInfo=lv=-57;cid=testClusterID;nsid=1070418625;c=1685782700134), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:58:20,671 DEBUG [Listener at localhost/41967] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf 2023-06-03 08:58:20,673 INFO [Listener at localhost/41967] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/zookeeper_0, clientPort=57782, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-03 08:58:20,674 INFO [Listener at localhost/41967] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57782 2023-06-03 08:58:20,674 INFO [Listener at localhost/41967] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:58:20,675 INFO [Listener at localhost/41967] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:58:20,689 INFO [Listener at localhost/41967] util.FSUtils(471): Created version file at hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62 with version=8 2023-06-03 08:58:20,689 INFO [Listener at localhost/41967] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/hbase-staging 2023-06-03 08:58:20,691 INFO [Listener at localhost/41967] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-03 08:58:20,691 INFO [Listener at localhost/41967] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:58:20,692 INFO [Listener at localhost/41967] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-03 08:58:20,692 INFO [Listener at localhost/41967] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-03 08:58:20,692 INFO [Listener at localhost/41967] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:58:20,692 INFO [Listener at localhost/41967] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with 
queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-03 08:58:20,692 INFO [Listener at localhost/41967] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-03 08:58:20,693 INFO [Listener at localhost/41967] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36435 2023-06-03 08:58:20,694 INFO [Listener at localhost/41967] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:58:20,695 INFO [Listener at localhost/41967] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:58:20,696 INFO [Listener at localhost/41967] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36435 connecting to ZooKeeper ensemble=127.0.0.1:57782 2023-06-03 08:58:20,702 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:364350x0, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-03 08:58:20,703 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36435-0x1008fe925840000 connected 2023-06-03 08:58:20,717 DEBUG [Listener at localhost/41967] zookeeper.ZKUtil(164): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 08:58:20,718 DEBUG [Listener at localhost/41967] zookeeper.ZKUtil(164): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:58:20,718 DEBUG [Listener at localhost/41967] zookeeper.ZKUtil(164): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-03 08:58:20,718 DEBUG [Listener at localhost/41967] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36435 2023-06-03 08:58:20,719 DEBUG [Listener at localhost/41967] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36435 2023-06-03 08:58:20,719 DEBUG [Listener at localhost/41967] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36435 2023-06-03 08:58:20,719 DEBUG [Listener at localhost/41967] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36435 2023-06-03 08:58:20,719 DEBUG [Listener at localhost/41967] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36435 2023-06-03 08:58:20,719 INFO [Listener at localhost/41967] master.HMaster(444): hbase.rootdir=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62, hbase.cluster.distributed=false 2023-06-03 08:58:20,732 INFO [Listener at localhost/41967] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-03 08:58:20,733 INFO [Listener at localhost/41967] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:58:20,733 INFO [Listener at localhost/41967] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-03 08:58:20,733 INFO [Listener at localhost/41967] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-03 08:58:20,733 INFO [Listener at localhost/41967] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:58:20,733 INFO [Listener at localhost/41967] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-03 08:58:20,733 INFO [Listener at localhost/41967] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-03 08:58:20,735 INFO [Listener at localhost/41967] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46097 2023-06-03 08:58:20,736 INFO [Listener at localhost/41967] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-03 08:58:20,737 DEBUG [Listener at localhost/41967] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-03 08:58:20,737 INFO [Listener at localhost/41967] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:58:20,739 INFO [Listener at localhost/41967] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:58:20,740 INFO [Listener at localhost/41967] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46097 connecting to ZooKeeper ensemble=127.0.0.1:57782 2023-06-03 08:58:20,742 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): regionserver:460970x0, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-03 08:58:20,743 DEBUG [Listener at localhost/41967] zookeeper.ZKUtil(164): regionserver:460970x0, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 08:58:20,744 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46097-0x1008fe925840001 connected 2023-06-03 08:58:20,744 DEBUG [Listener at localhost/41967] zookeeper.ZKUtil(164): regionserver:46097-0x1008fe925840001, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:58:20,745 DEBUG [Listener at localhost/41967] zookeeper.ZKUtil(164): regionserver:46097-0x1008fe925840001, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-03 08:58:20,745 DEBUG [Listener at localhost/41967] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46097 2023-06-03 08:58:20,745 DEBUG [Listener at localhost/41967] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46097 2023-06-03 08:58:20,745 DEBUG [Listener at localhost/41967] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46097 2023-06-03 08:58:20,746 DEBUG [Listener at localhost/41967] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46097 2023-06-03 08:58:20,746 DEBUG [Listener at localhost/41967] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46097 2023-06-03 08:58:20,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36435,1685782700691 2023-06-03 08:58:20,748 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-03 08:58:20,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36435,1685782700691 2023-06-03 08:58:20,751 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-03 08:58:20,751 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): regionserver:46097-0x1008fe925840001, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-03 08:58:20,751 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:58:20,751 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-03 08:58:20,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36435,1685782700691 from backup master directory 2023-06-03 08:58:20,752 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-03 08:58:20,753 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36435,1685782700691 2023-06-03 08:58:20,754 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-03 08:58:20,754 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-03 08:58:20,754 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36435,1685782700691 2023-06-03 08:58:20,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/hbase.id with ID: 61052c78-f464-44fc-8416-90c5a703f11d 2023-06-03 08:58:20,776 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:58:20,778 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:58:20,785 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x29157a56 to 127.0.0.1:57782 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 08:58:20,788 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@720b2af2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 08:58:20,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-03 08:58:20,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-03 08:58:20,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 08:58:20,790 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/data/master/store-tmp 2023-06-03 08:58:20,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect 
now enable 2023-06-03 08:58:20,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-03 08:58:20,798 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:58:20,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:58:20,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-03 08:58:20,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:58:20,798 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:58:20,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 08:58:20,799 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/WALs/jenkins-hbase4.apache.org,36435,1685782700691 2023-06-03 08:58:20,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36435%2C1685782700691, suffix=, logDir=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/WALs/jenkins-hbase4.apache.org,36435,1685782700691, archiveDir=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/oldWALs, maxLogs=10 2023-06-03 08:58:20,812 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/WALs/jenkins-hbase4.apache.org,36435,1685782700691/jenkins-hbase4.apache.org%2C36435%2C1685782700691.1685782700802 2023-06-03 08:58:20,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35257,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK], DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] 2023-06-03 08:58:20,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:58:20,813 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:58:20,813 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:58:20,813 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:58:20,814 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:58:20,816 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-03 08:58:20,816 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-03 08:58:20,817 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:58:20,818 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:58:20,818 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:58:20,821 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:58:20,823 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:58:20,823 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=827151, jitterRate=0.051777973771095276}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 08:58:20,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 08:58:20,824 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-03 08:58:20,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-03 08:58:20,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-03 08:58:20,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-03 08:58:20,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-03 08:58:20,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-03 08:58:20,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-03 08:58:20,826 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-03 08:58:20,827 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-03 08:58:20,838 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-03 08:58:20,838 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-06-03 08:58:20,839 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-03 08:58:20,839 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-03 08:58:20,839 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-03 08:58:20,842 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:58:20,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-03 08:58:20,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-03 08:58:20,844 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-03 08:58:20,845 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): regionserver:46097-0x1008fe925840001, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-03 08:58:20,845 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-03 08:58:20,845 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:58:20,845 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36435,1685782700691, sessionid=0x1008fe925840000, setting cluster-up flag (Was=false) 2023-06-03 08:58:20,849 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:58:20,854 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-03 08:58:20,855 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36435,1685782700691 2023-06-03 08:58:20,859 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 
08:58:20,863 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-03 08:58:20,864 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36435,1685782700691 2023-06-03 08:58:20,865 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/.hbase-snapshot/.tmp 2023-06-03 08:58:20,868 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-03 08:58:20,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:58:20,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:58:20,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:58:20,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:58:20,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-06-03 08:58:20,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:58:20,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-03 08:58:20,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:58:20,872 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685782730872 2023-06-03 08:58:20,872 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-03 08:58:20,872 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-03 08:58:20,872 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-03 08:58:20,873 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-03 08:58:20,873 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-03 08:58:20,873 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-03 08:58:20,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-03 08:58:20,876 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-03 08:58:20,877 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-03 08:58:20,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-03 08:58:20,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-03 08:58:20,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-03 08:58:20,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-03 08:58:20,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-03 08:58:20,878 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782700878,5,FailOnTimeoutGroup] 2023-06-03 08:58:20,878 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782700878,5,FailOnTimeoutGroup] 2023-06-03 08:58:20,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-03 08:58:20,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-03 08:58:20,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-03 08:58:20,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-03 08:58:20,878 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-03 08:58:20,891 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-03 08:58:20,891 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-03 08:58:20,891 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62 2023-06-03 08:58:20,913 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:58:20,914 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-03 08:58:20,915 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740/info 2023-06-03 08:58:20,916 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-03 08:58:20,916 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:58:20,917 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-03 08:58:20,918 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740/rep_barrier 2023-06-03 08:58:20,918 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-03 08:58:20,919 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:58:20,919 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-03 08:58:20,920 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740/table 2023-06-03 08:58:20,921 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-03 08:58:20,921 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:58:20,922 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740 2023-06-03 08:58:20,922 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740 2023-06-03 08:58:20,924 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-03 08:58:20,926 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-03 08:58:20,928 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:58:20,928 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=715487, jitterRate=-0.09021240472793579}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-03 08:58:20,928 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-03 08:58:20,928 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 08:58:20,928 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 08:58:20,928 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 08:58:20,928 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 08:58:20,928 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 08:58:20,929 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-03 08:58:20,929 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 08:58:20,930 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-03 08:58:20,930 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-03 08:58:20,930 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-03 08:58:20,932 INFO 
[PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-03 08:58:20,933 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-03 08:58:20,948 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(951): ClusterId : 61052c78-f464-44fc-8416-90c5a703f11d 2023-06-03 08:58:20,949 DEBUG [RS:0;jenkins-hbase4:46097] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-03 08:58:20,952 DEBUG [RS:0;jenkins-hbase4:46097] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-03 08:58:20,952 DEBUG [RS:0;jenkins-hbase4:46097] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-03 08:58:20,954 DEBUG [RS:0;jenkins-hbase4:46097] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-03 08:58:20,955 DEBUG [RS:0;jenkins-hbase4:46097] zookeeper.ReadOnlyZKClient(139): Connect 0x4c48af43 to 127.0.0.1:57782 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 08:58:20,959 DEBUG [RS:0;jenkins-hbase4:46097] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@48772212, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 08:58:20,960 DEBUG [RS:0;jenkins-hbase4:46097] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d75a578, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-03 08:58:20,968 DEBUG [RS:0;jenkins-hbase4:46097] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46097 2023-06-03 08:58:20,968 INFO [RS:0;jenkins-hbase4:46097] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-03 08:58:20,968 INFO [RS:0;jenkins-hbase4:46097] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-03 08:58:20,968 DEBUG [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-03 08:58:20,969 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,36435,1685782700691 with isa=jenkins-hbase4.apache.org/172.31.14.131:46097, startcode=1685782700732 2023-06-03 08:58:20,969 DEBUG [RS:0;jenkins-hbase4:46097] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-03 08:58:20,972 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52489, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-06-03 08:58:20,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:58:20,974 DEBUG [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62 2023-06-03 08:58:20,974 DEBUG [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35813 2023-06-03 08:58:20,974 DEBUG [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-03 08:58:20,976 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:58:20,976 DEBUG [RS:0;jenkins-hbase4:46097] zookeeper.ZKUtil(162): regionserver:46097-0x1008fe925840001, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:58:20,976 WARN [RS:0;jenkins-hbase4:46097] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-03 08:58:20,976 INFO [RS:0;jenkins-hbase4:46097] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 08:58:20,977 DEBUG [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1946): logDir=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:58:20,977 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46097,1685782700732] 2023-06-03 08:58:20,980 DEBUG [RS:0;jenkins-hbase4:46097] zookeeper.ZKUtil(162): regionserver:46097-0x1008fe925840001, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:58:20,981 DEBUG [RS:0;jenkins-hbase4:46097] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-03 08:58:20,981 INFO [RS:0;jenkins-hbase4:46097] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-03 08:58:20,985 INFO [RS:0;jenkins-hbase4:46097] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-03 08:58:20,985 INFO [RS:0;jenkins-hbase4:46097] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-03 08:58:20,985 INFO [RS:0;jenkins-hbase4:46097] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 08:58:20,985 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-03 08:58:20,987 INFO [RS:0;jenkins-hbase4:46097] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-03 08:58:20,987 DEBUG [RS:0;jenkins-hbase4:46097] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:58:20,987 DEBUG [RS:0;jenkins-hbase4:46097] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:58:20,987 DEBUG [RS:0;jenkins-hbase4:46097] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:58:20,987 DEBUG [RS:0;jenkins-hbase4:46097] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:58:20,987 DEBUG [RS:0;jenkins-hbase4:46097] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:58:20,988 DEBUG [RS:0;jenkins-hbase4:46097] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-03 08:58:20,988 DEBUG [RS:0;jenkins-hbase4:46097] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:58:20,988 DEBUG [RS:0;jenkins-hbase4:46097] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:58:20,988 DEBUG [RS:0;jenkins-hbase4:46097] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:58:20,988 DEBUG [RS:0;jenkins-hbase4:46097] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:58:20,990 INFO [RS:0;jenkins-hbase4:46097] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-03 08:58:20,990 INFO [RS:0;jenkins-hbase4:46097] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-03 08:58:20,990 INFO [RS:0;jenkins-hbase4:46097] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-03 08:58:21,001 INFO [RS:0;jenkins-hbase4:46097] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-03 08:58:21,001 INFO [RS:0;jenkins-hbase4:46097] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46097,1685782700732-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-03 08:58:21,013 INFO [RS:0;jenkins-hbase4:46097] regionserver.Replication(203): jenkins-hbase4.apache.org,46097,1685782700732 started 2023-06-03 08:58:21,013 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46097,1685782700732, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46097, sessionid=0x1008fe925840001 2023-06-03 08:58:21,013 DEBUG [RS:0;jenkins-hbase4:46097] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-03 08:58:21,013 DEBUG [RS:0;jenkins-hbase4:46097] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:58:21,013 DEBUG [RS:0;jenkins-hbase4:46097] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46097,1685782700732' 2023-06-03 08:58:21,013 DEBUG [RS:0;jenkins-hbase4:46097] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 08:58:21,013 DEBUG [RS:0;jenkins-hbase4:46097] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 08:58:21,014 DEBUG [RS:0;jenkins-hbase4:46097] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-03 08:58:21,014 DEBUG [RS:0;jenkins-hbase4:46097] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-03 08:58:21,014 DEBUG [RS:0;jenkins-hbase4:46097] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:58:21,014 DEBUG [RS:0;jenkins-hbase4:46097] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46097,1685782700732' 2023-06-03 08:58:21,014 DEBUG [RS:0;jenkins-hbase4:46097] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-03 08:58:21,014 DEBUG [RS:0;jenkins-hbase4:46097] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-03 08:58:21,015 DEBUG [RS:0;jenkins-hbase4:46097] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-03 08:58:21,015 INFO [RS:0;jenkins-hbase4:46097] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-03 08:58:21,015 INFO [RS:0;jenkins-hbase4:46097] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-03 08:58:21,083 DEBUG [jenkins-hbase4:36435] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-03 08:58:21,084 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46097,1685782700732, state=OPENING 2023-06-03 08:58:21,086 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-03 08:58:21,087 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:58:21,088 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46097,1685782700732}] 2023-06-03 08:58:21,088 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-03 08:58:21,117 INFO [RS:0;jenkins-hbase4:46097] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46097%2C1685782700732, suffix=, logDir=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732, archiveDir=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/oldWALs, maxLogs=32 2023-06-03 08:58:21,132 INFO [RS:0;jenkins-hbase4:46097] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782701118 2023-06-03 08:58:21,132 DEBUG [RS:0;jenkins-hbase4:46097] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35257,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK], DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] 2023-06-03 08:58:21,243 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:58:21,243 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-03 08:58:21,245 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35142, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-03 08:58:21,249 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-03 08:58:21,249 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 08:58:21,251 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46097%2C1685782700732.meta, suffix=.meta, logDir=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732, archiveDir=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/oldWALs, maxLogs=32 2023-06-03 08:58:21,265 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.meta.1685782701252.meta 2023-06-03 08:58:21,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35257,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK], DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] 2023-06-03 08:58:21,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:58:21,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-03 08:58:21,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-03 08:58:21,265 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-03 08:58:21,266 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-03 08:58:21,266 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:58:21,266 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-03 08:58:21,266 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-03 08:58:21,268 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-03 08:58:21,269 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740/info 2023-06-03 08:58:21,270 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740/info 2023-06-03 08:58:21,270 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-03 08:58:21,271 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:58:21,271 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-03 08:58:21,272 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740/rep_barrier 2023-06-03 08:58:21,272 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740/rep_barrier 2023-06-03 08:58:21,272 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-03 08:58:21,272 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:58:21,272 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-03 08:58:21,273 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740/table 2023-06-03 08:58:21,273 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740/table 2023-06-03 08:58:21,274 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-03 08:58:21,274 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:58:21,275 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740 2023-06-03 08:58:21,276 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/meta/1588230740 2023-06-03 08:58:21,278 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-03 08:58:21,280 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-03 08:58:21,281 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=703576, jitterRate=-0.10535697638988495}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-03 08:58:21,281 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-03 08:58:21,283 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685782701243 2023-06-03 08:58:21,287 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-03 08:58:21,288 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-03 08:58:21,289 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46097,1685782700732, state=OPEN 2023-06-03 08:58:21,291 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-03 08:58:21,291 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-03 08:58:21,293 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-03 08:58:21,294 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46097,1685782700732 in 203 msec 2023-06-03 08:58:21,296 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-03 08:58:21,296 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 364 msec 2023-06-03 08:58:21,299 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 431 msec 2023-06-03 08:58:21,299 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685782701299, completionTime=-1 2023-06-03 08:58:21,299 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-03 08:58:21,299 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-03 08:58:21,302 DEBUG [hconnection-0xf7f86dc-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-03 08:58:21,304 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35154, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-03 08:58:21,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-03 08:58:21,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685782761306 2023-06-03 08:58:21,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685782821306 2023-06-03 08:58:21,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-06-03 08:58:21,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36435,1685782700691-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 08:58:21,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36435,1685782700691-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 08:58:21,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36435,1685782700691-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 08:58:21,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36435, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 08:58:21,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-03 08:58:21,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-03 08:58:21,314 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-03 08:58:21,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-03 08:58:21,316 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-03 08:58:21,318 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-03 08:58:21,319 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-03 08:58:21,326 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/.tmp/data/hbase/namespace/b8dd7b85738eb523dce6fc9be7367138 2023-06-03 08:58:21,327 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/.tmp/data/hbase/namespace/b8dd7b85738eb523dce6fc9be7367138 empty. 2023-06-03 08:58:21,328 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/.tmp/data/hbase/namespace/b8dd7b85738eb523dce6fc9be7367138 2023-06-03 08:58:21,328 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-03 08:58:21,339 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-03 08:58:21,340 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => b8dd7b85738eb523dce6fc9be7367138, NAME => 'hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/.tmp 2023-06-03 08:58:21,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:58:21,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing b8dd7b85738eb523dce6fc9be7367138, disabling compactions & flushes 2023-06-03 08:58:21,348 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 
2023-06-03 08:58:21,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 2023-06-03 08:58:21,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. after waiting 0 ms 2023-06-03 08:58:21,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 2023-06-03 08:58:21,348 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 2023-06-03 08:58:21,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for b8dd7b85738eb523dce6fc9be7367138: 2023-06-03 08:58:21,350 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-03 08:58:21,351 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782701351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782701351"}]},"ts":"1685782701351"} 2023-06-03 08:58:21,354 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-03 08:58:21,354 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-03 08:58:21,355 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782701355"}]},"ts":"1685782701355"} 2023-06-03 08:58:21,356 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-03 08:58:21,362 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b8dd7b85738eb523dce6fc9be7367138, ASSIGN}] 2023-06-03 08:58:21,364 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b8dd7b85738eb523dce6fc9be7367138, ASSIGN 2023-06-03 08:58:21,365 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b8dd7b85738eb523dce6fc9be7367138, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46097,1685782700732; forceNewPlan=false, retain=false 2023-06-03 08:58:21,516 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b8dd7b85738eb523dce6fc9be7367138, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:58:21,517 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782701516"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782701516"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782701516"}]},"ts":"1685782701516"} 2023-06-03 08:58:21,519 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure b8dd7b85738eb523dce6fc9be7367138, server=jenkins-hbase4.apache.org,46097,1685782700732}] 2023-06-03 08:58:21,675 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 2023-06-03 08:58:21,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b8dd7b85738eb523dce6fc9be7367138, NAME => 'hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138.', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:58:21,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b8dd7b85738eb523dce6fc9be7367138 2023-06-03 08:58:21,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:58:21,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b8dd7b85738eb523dce6fc9be7367138 2023-06-03 08:58:21,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b8dd7b85738eb523dce6fc9be7367138 2023-06-03 08:58:21,677 INFO [StoreOpener-b8dd7b85738eb523dce6fc9be7367138-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b8dd7b85738eb523dce6fc9be7367138 2023-06-03 08:58:21,678 DEBUG [StoreOpener-b8dd7b85738eb523dce6fc9be7367138-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/namespace/b8dd7b85738eb523dce6fc9be7367138/info 2023-06-03 08:58:21,678 DEBUG [StoreOpener-b8dd7b85738eb523dce6fc9be7367138-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/namespace/b8dd7b85738eb523dce6fc9be7367138/info 2023-06-03 08:58:21,678 INFO [StoreOpener-b8dd7b85738eb523dce6fc9be7367138-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b8dd7b85738eb523dce6fc9be7367138 columnFamilyName info 2023-06-03 08:58:21,679 INFO [StoreOpener-b8dd7b85738eb523dce6fc9be7367138-1] regionserver.HStore(310): Store=b8dd7b85738eb523dce6fc9be7367138/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:58:21,680 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/namespace/b8dd7b85738eb523dce6fc9be7367138 2023-06-03 08:58:21,680 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/namespace/b8dd7b85738eb523dce6fc9be7367138 2023-06-03 08:58:21,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b8dd7b85738eb523dce6fc9be7367138 2023-06-03 08:58:21,684 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/hbase/namespace/b8dd7b85738eb523dce6fc9be7367138/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:58:21,685 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b8dd7b85738eb523dce6fc9be7367138; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=814941, jitterRate=0.036252155900001526}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 08:58:21,685 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b8dd7b85738eb523dce6fc9be7367138: 2023-06-03 08:58:21,686 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138., pid=6, masterSystemTime=1685782701671 2023-06-03 08:58:21,689 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 2023-06-03 08:58:21,689 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 
2023-06-03 08:58:21,689 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b8dd7b85738eb523dce6fc9be7367138, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:58:21,690 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782701689"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685782701689"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685782701689"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782701689"}]},"ts":"1685782701689"} 2023-06-03 08:58:21,694 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-03 08:58:21,694 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure b8dd7b85738eb523dce6fc9be7367138, server=jenkins-hbase4.apache.org,46097,1685782700732 in 172 msec 2023-06-03 08:58:21,696 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-03 08:58:21,696 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b8dd7b85738eb523dce6fc9be7367138, ASSIGN in 332 msec 2023-06-03 08:58:21,697 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-03 08:58:21,697 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782701697"}]},"ts":"1685782701697"} 2023-06-03 08:58:21,699 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-03 08:58:21,702 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-03 08:58:21,704 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 388 msec 2023-06-03 08:58:21,717 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-03 08:58:21,718 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-03 08:58:21,718 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:58:21,722 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-03 08:58:21,730 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): 
master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-03 08:58:21,737 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-06-03 08:58:21,744 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-03 08:58:21,752 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-03 08:58:21,755 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-06-03 08:58:21,768 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-03 08:58:21,771 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-03 08:58:21,772 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.018sec 2023-06-03 08:58:21,772 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-03 08:58:21,772 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-03 08:58:21,772 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-03 08:58:21,773 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36435,1685782700691-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-03 08:58:21,773 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36435,1685782700691-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-03 08:58:21,774 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-03 08:58:21,848 DEBUG [Listener at localhost/41967] zookeeper.ReadOnlyZKClient(139): Connect 0x15908d76 to 127.0.0.1:57782 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 08:58:21,852 DEBUG [Listener at localhost/41967] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c27c958, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 08:58:21,853 DEBUG [hconnection-0x5cc4ce9-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-03 08:58:21,855 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35166, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-03 08:58:21,857 INFO [Listener at localhost/41967] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36435,1685782700691 2023-06-03 08:58:21,857 INFO [Listener at localhost/41967] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:58:21,861 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-03 08:58:21,861 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:58:21,862 INFO [Listener at localhost/41967] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-03 08:58:21,862 INFO [Listener at localhost/41967] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-06-03 08:58:21,862 INFO [Listener at localhost/41967] wal.TestLogRolling(432): Replication=2 2023-06-03 08:58:21,864 DEBUG [Listener at localhost/41967] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-03 08:58:21,866 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55204, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-03 08:58:21,868 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-03 08:58:21,868 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-03 08:58:21,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-03 08:58:21,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-06-03 08:58:21,872 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-06-03 08:58:21,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-06-03 08:58:21,873 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-03 08:58:21,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-03 08:58:21,875 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/4549313c44dac3f8378988dad5edbf05 2023-06-03 08:58:21,875 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/4549313c44dac3f8378988dad5edbf05 empty. 
2023-06-03 08:58:21,876 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/4549313c44dac3f8378988dad5edbf05 2023-06-03 08:58:21,876 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-06-03 08:58:21,888 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-06-03 08:58:21,889 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4549313c44dac3f8378988dad5edbf05, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/.tmp 2023-06-03 08:58:21,899 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:58:21,899 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing 4549313c44dac3f8378988dad5edbf05, disabling compactions & flushes 2023-06-03 08:58:21,899 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 2023-06-03 08:58:21,900 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 2023-06-03 08:58:21,900 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. after waiting 0 ms 2023-06-03 08:58:21,900 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 2023-06-03 08:58:21,900 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 
2023-06-03 08:58:21,900 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for 4549313c44dac3f8378988dad5edbf05: 2023-06-03 08:58:21,902 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-06-03 08:58:21,903 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685782701903"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782701903"}]},"ts":"1685782701903"} 2023-06-03 08:58:21,905 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-03 08:58:21,906 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-03 08:58:21,906 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782701906"}]},"ts":"1685782701906"} 2023-06-03 08:58:21,907 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-06-03 08:58:21,911 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=4549313c44dac3f8378988dad5edbf05, ASSIGN}] 2023-06-03 08:58:21,913 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=4549313c44dac3f8378988dad5edbf05, ASSIGN 2023-06-03 08:58:21,914 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=4549313c44dac3f8378988dad5edbf05, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46097,1685782700732; forceNewPlan=false, retain=false 2023-06-03 08:58:22,065 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=4549313c44dac3f8378988dad5edbf05, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:58:22,065 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685782702065"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782702065"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782702065"}]},"ts":"1685782702065"} 2023-06-03 08:58:22,068 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 4549313c44dac3f8378988dad5edbf05, server=jenkins-hbase4.apache.org,46097,1685782700732}] 
2023-06-03 08:58:22,224 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 2023-06-03 08:58:22,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4549313c44dac3f8378988dad5edbf05, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05.', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:58:22,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart 4549313c44dac3f8378988dad5edbf05 2023-06-03 08:58:22,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:58:22,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4549313c44dac3f8378988dad5edbf05 2023-06-03 08:58:22,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4549313c44dac3f8378988dad5edbf05 2023-06-03 08:58:22,226 INFO [StoreOpener-4549313c44dac3f8378988dad5edbf05-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4549313c44dac3f8378988dad5edbf05 2023-06-03 08:58:22,227 DEBUG [StoreOpener-4549313c44dac3f8378988dad5edbf05-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/default/TestLogRolling-testLogRollOnPipelineRestart/4549313c44dac3f8378988dad5edbf05/info 2023-06-03 08:58:22,228 DEBUG [StoreOpener-4549313c44dac3f8378988dad5edbf05-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/default/TestLogRolling-testLogRollOnPipelineRestart/4549313c44dac3f8378988dad5edbf05/info 2023-06-03 08:58:22,228 INFO [StoreOpener-4549313c44dac3f8378988dad5edbf05-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4549313c44dac3f8378988dad5edbf05 columnFamilyName info 2023-06-03 08:58:22,228 INFO [StoreOpener-4549313c44dac3f8378988dad5edbf05-1] regionserver.HStore(310): Store=4549313c44dac3f8378988dad5edbf05/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:58:22,229 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/default/TestLogRolling-testLogRollOnPipelineRestart/4549313c44dac3f8378988dad5edbf05 2023-06-03 08:58:22,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/default/TestLogRolling-testLogRollOnPipelineRestart/4549313c44dac3f8378988dad5edbf05 2023-06-03 08:58:22,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4549313c44dac3f8378988dad5edbf05 2023-06-03 08:58:22,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/data/default/TestLogRolling-testLogRollOnPipelineRestart/4549313c44dac3f8378988dad5edbf05/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:58:22,234 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4549313c44dac3f8378988dad5edbf05; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=767609, jitterRate=-0.02393469214439392}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 08:58:22,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4549313c44dac3f8378988dad5edbf05: 2023-06-03 08:58:22,235 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05., pid=11, masterSystemTime=1685782702221 2023-06-03 08:58:22,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 2023-06-03 08:58:22,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 
2023-06-03 08:58:22,238 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=4549313c44dac3f8378988dad5edbf05, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:58:22,238 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685782702238"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685782702238"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685782702238"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782702238"}]},"ts":"1685782702238"} 2023-06-03 08:58:22,242 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-03 08:58:22,242 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 4549313c44dac3f8378988dad5edbf05, server=jenkins-hbase4.apache.org,46097,1685782700732 in 172 msec 2023-06-03 08:58:22,244 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-03 08:58:22,244 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=4549313c44dac3f8378988dad5edbf05, ASSIGN in 331 msec 2023-06-03 08:58:22,245 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-03 08:58:22,245 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782702245"}]},"ts":"1685782702245"} 2023-06-03 08:58:22,247 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-06-03 08:58:22,249 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-06-03 08:58:22,251 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 381 msec 2023-06-03 08:58:24,639 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-03 08:58:26,981 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-03 08:58:26,982 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-06-03 08:58:31,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-03 08:58:31,875 INFO [Listener at localhost/41967] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 
2023-06-03 08:58:31,877 DEBUG [Listener at localhost/41967] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart
2023-06-03 08:58:31,878 DEBUG [Listener at localhost/41967] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05.
2023-06-03 08:58:33,884 INFO [Listener at localhost/41967] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782701118
2023-06-03 08:58:33,885 WARN [Listener at localhost/41967] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-03 08:58:33,886 WARN [ResponseProcessor for block BP-182810667-172.31.14.131-1685782700134:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-182810667-172.31.14.131-1685782700134:blk_1073741833_1009
java.io.EOFException: Unexpected EOF while trying to read response from server
    at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
    at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-03 08:58:33,887 WARN [ResponseProcessor for block BP-182810667-172.31.14.131-1685782700134:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-182810667-172.31.14.131-1685782700134:blk_1073741832_1008
java.io.EOFException: Unexpected EOF while trying to read response from server
    at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
    at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-03 08:58:33,887 WARN [DataStreamer for file /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.meta.1685782701252.meta block BP-182810667-172.31.14.131-1685782700134:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-182810667-172.31.14.131-1685782700134:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:35257,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK], DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:35257,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK]) is bad.
2023-06-03 08:58:33,887 WARN [DataStreamer for file /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782701118 block BP-182810667-172.31.14.131-1685782700134:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-182810667-172.31.14.131-1685782700134:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:35257,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK], DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:35257,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK]) is bad.
2023-06-03 08:58:33,887 WARN [ResponseProcessor for block BP-182810667-172.31.14.131-1685782700134:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-182810667-172.31.14.131-1685782700134:blk_1073741829_1005
java.io.EOFException: Unexpected EOF while trying to read response from server
    at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
    at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-03 08:58:33,888 WARN [DataStreamer for file /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/WALs/jenkins-hbase4.apache.org,36435,1685782700691/jenkins-hbase4.apache.org%2C36435%2C1685782700691.1685782700802 block BP-182810667-172.31.14.131-1685782700134:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-182810667-172.31.14.131-1685782700134:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:35257,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK], DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:35257,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK]) is bad.
2023-06-03 08:58:33,893 INFO [Listener at localhost/41967] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-03 08:58:33,899 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_496614492_17 at /127.0.0.1:32808 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:43659:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32808 dst: /127.0.0.1:43659
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:43659 remote=/127.0.0.1:32808]. 60000 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-06-03 08:58:33,900 WARN [PacketResponder: BP-182810667-172.31.14.131-1685782700134:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:43659]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
    at java.lang.Thread.run(Thread.java:750)
2023-06-03 08:58:33,899 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_496614492_17 at /127.0.0.1:32818 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:43659:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32818 dst: /127.0.0.1:43659
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:43659 remote=/127.0.0.1:32818]. 60000 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-06-03 08:58:33,907 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_496614492_17 at /127.0.0.1:32772 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:35257:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32772 dst: /127.0.0.1:35257
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-06-03 08:58:33,907 WARN [BP-182810667-172.31.14.131-1685782700134 heartbeating to localhost/127.0.0.1:35813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-03 08:58:33,902 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1676028266_17 at /127.0.0.1:60968 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:35257:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60968 dst: /127.0.0.1:35257
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:33,901 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_496614492_17 at /127.0.0.1:32780 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:35257:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32780 dst: /127.0.0.1:35257 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:33,900 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1676028266_17 at /127.0.0.1:32772 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:43659:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32772 dst: /127.0.0.1:43659 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:43659 remote=/127.0.0.1:32772]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:33,908 WARN [BP-182810667-172.31.14.131-1685782700134 heartbeating to localhost/127.0.0.1:35813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-182810667-172.31.14.131-1685782700134 (Datanode Uuid a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3) service to localhost/127.0.0.1:35813 2023-06-03 08:58:33,912 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data3/current/BP-182810667-172.31.14.131-1685782700134] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:58:33,912 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data4/current/BP-182810667-172.31.14.131-1685782700134] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:58:33,919 WARN [Listener at localhost/41967] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:58:33,921 WARN [Listener at localhost/41967] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:58:33,922 INFO [Listener at localhost/41967] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:58:33,927 INFO [Listener at localhost/41967] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/java.io.tmpdir/Jetty_localhost_44563_datanode____.vq612f/webapp 2023-06-03 08:58:34,017 INFO [Listener at localhost/41967] 
log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44563 2023-06-03 08:58:34,024 WARN [Listener at localhost/33115] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:58:34,084 WARN [Listener at localhost/33115] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 08:58:34,086 WARN [ResponseProcessor for block BP-182810667-172.31.14.131-1685782700134:blk_1073741832_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-182810667-172.31.14.131-1685782700134:blk_1073741832_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-03 08:58:34,086 WARN [ResponseProcessor for block BP-182810667-172.31.14.131-1685782700134:blk_1073741829_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-182810667-172.31.14.131-1685782700134:blk_1073741829_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-03 08:58:34,086 WARN [ResponseProcessor for block BP-182810667-172.31.14.131-1685782700134:blk_1073741833_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-182810667-172.31.14.131-1685782700134:blk_1073741833_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-03 08:58:34,091 INFO [Listener at localhost/33115] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:58:34,156 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7091f91f68dd72de: Processing first storage report for DS-dc8771dc-4912-47c9-bf8a-1dc714a02252 from datanode a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3 2023-06-03 08:58:34,157 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7091f91f68dd72de: from storage DS-dc8771dc-4912-47c9-bf8a-1dc714a02252 node DatanodeRegistration(127.0.0.1:45627, datanodeUuid=a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3, infoPort=40425, infoSecurePort=0, ipcPort=33115, storageInfo=lv=-57;cid=testClusterID;nsid=1070418625;c=1685782700134), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-03 08:58:34,157 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7091f91f68dd72de: Processing first storage report for DS-1005a0e1-3f13-4db6-bbe8-4f025e44de98 from datanode a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3 2023-06-03 08:58:34,157 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7091f91f68dd72de: from storage DS-1005a0e1-3f13-4db6-bbe8-4f025e44de98 node DatanodeRegistration(127.0.0.1:45627, 
datanodeUuid=a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3, infoPort=40425, infoSecurePort=0, ipcPort=33115, storageInfo=lv=-57;cid=testClusterID;nsid=1070418625;c=1685782700134), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:58:34,195 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1676028266_17 at /127.0.0.1:36878 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:43659:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36878 dst: /127.0.0.1:43659 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:34,196 WARN [BP-182810667-172.31.14.131-1685782700134 heartbeating to localhost/127.0.0.1:35813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 08:58:34,195 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_496614492_17 at /127.0.0.1:36862 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:43659:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36862 dst: /127.0.0.1:43659 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:34,195 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_496614492_17 at /127.0.0.1:36856 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:43659:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36856 dst: /127.0.0.1:43659 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:34,196 WARN [BP-182810667-172.31.14.131-1685782700134 heartbeating to localhost/127.0.0.1:35813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-182810667-172.31.14.131-1685782700134 (Datanode Uuid 432754c7-bf16-47e8-9c50-fffff622125f) service to localhost/127.0.0.1:35813 2023-06-03 08:58:34,198 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data1/current/BP-182810667-172.31.14.131-1685782700134] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:58:34,198 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data2/current/BP-182810667-172.31.14.131-1685782700134] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:58:34,204 WARN [Listener at localhost/33115] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:58:34,209 WARN [Listener at localhost/33115] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:58:34,210 INFO [Listener at localhost/33115] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:58:34,215 INFO [Listener at localhost/33115] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/java.io.tmpdir/Jetty_localhost_39691_datanode____.fqmyr5/webapp 2023-06-03 08:58:34,305 INFO [Listener at localhost/33115] 
log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39691 2023-06-03 08:58:34,314 WARN [Listener at localhost/42641] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:58:34,390 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3c8aa5dba201a287: Processing first storage report for DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484 from datanode 432754c7-bf16-47e8-9c50-fffff622125f 2023-06-03 08:58:34,390 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3c8aa5dba201a287: from storage DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484 node DatanodeRegistration(127.0.0.1:36303, datanodeUuid=432754c7-bf16-47e8-9c50-fffff622125f, infoPort=34179, infoSecurePort=0, ipcPort=42641, storageInfo=lv=-57;cid=testClusterID;nsid=1070418625;c=1685782700134), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-03 08:58:34,391 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3c8aa5dba201a287: Processing first storage report for DS-d54309db-b77d-4c95-acd7-864d3101926c from datanode 432754c7-bf16-47e8-9c50-fffff622125f 2023-06-03 08:58:34,391 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3c8aa5dba201a287: from storage DS-d54309db-b77d-4c95-acd7-864d3101926c node DatanodeRegistration(127.0.0.1:36303, datanodeUuid=432754c7-bf16-47e8-9c50-fffff622125f, infoPort=34179, infoSecurePort=0, ipcPort=42641, storageInfo=lv=-57;cid=testClusterID;nsid=1070418625;c=1685782700134), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:58:35,317 INFO [Listener at localhost/42641] wal.TestLogRolling(481): Data Nodes restarted 2023-06-03 08:58:35,319 INFO [Listener at localhost/42641] wal.AbstractTestLogRolling(233): Validated row row1002 2023-06-03 08:58:35,320 WARN [RS:0;jenkins-hbase4:46097.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:35,321 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46097%2C1685782700732:(num 1685782701118) roll requested 2023-06-03 08:58:35,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46097] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:35,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46097] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:35166 deadline: 1685782725320, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-06-03 08:58:35,330 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782701118 newFile=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782715321 2023-06-03 08:58:35,330 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-06-03 08:58:35,330 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782701118 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782715321 2023-06-03 08:58:35,330 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:45627,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK], DatanodeInfoWithStorage[127.0.0.1:36303,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] 2023-06-03 08:58:35,330 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:35,330 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782701118 is not closed yet, will try archiving it next time 2023-06-03 08:58:35,330 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782701118; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:47,425 INFO [Listener at localhost/42641] wal.AbstractTestLogRolling(233): Validated row row1003 2023-06-03 08:58:49,427 WARN [Listener at localhost/42641] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 08:58:49,429 WARN [ResponseProcessor for block BP-182810667-172.31.14.131-1685782700134:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-182810667-172.31.14.131-1685782700134:blk_1073741838_1017 java.io.IOException: Bad response ERROR for BP-182810667-172.31.14.131-1685782700134:blk_1073741838_1017 from datanode DatanodeInfoWithStorage[127.0.0.1:36303,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-03 08:58:49,429 WARN [DataStreamer for file /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782715321 block BP-182810667-172.31.14.131-1685782700134:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-182810667-172.31.14.131-1685782700134:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45627,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK], DatanodeInfoWithStorage[127.0.0.1:36303,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:36303,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]) is bad. 
2023-06-03 08:58:49,429 WARN [PacketResponder: BP-182810667-172.31.14.131-1685782700134:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:36303]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:49,430 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_496614492_17 at /127.0.0.1:43594 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:45627:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:43594 dst: /127.0.0.1:45627 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:49,433 INFO [Listener at localhost/42641] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:58:49,536 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_496614492_17 at /127.0.0.1:55904 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:36303:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55904 dst: /127.0.0.1:36303 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:49,538 WARN [BP-182810667-172.31.14.131-1685782700134 heartbeating to localhost/127.0.0.1:35813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 08:58:49,538 WARN [BP-182810667-172.31.14.131-1685782700134 heartbeating to localhost/127.0.0.1:35813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-182810667-172.31.14.131-1685782700134 (Datanode Uuid 432754c7-bf16-47e8-9c50-fffff622125f) service to localhost/127.0.0.1:35813 2023-06-03 08:58:49,539 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data1/current/BP-182810667-172.31.14.131-1685782700134] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:58:49,539 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data2/current/BP-182810667-172.31.14.131-1685782700134] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:58:49,546 WARN [Listener at localhost/42641] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:58:49,549 WARN [Listener at localhost/42641] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:58:49,550 INFO [Listener at localhost/42641] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:58:49,557 INFO [Listener at localhost/42641] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/java.io.tmpdir/Jetty_localhost_37019_datanode____lonmmz/webapp 2023-06-03 08:58:49,650 INFO [Listener at localhost/42641] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37019 2023-06-03 08:58:49,658 WARN [Listener at localhost/35313] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:58:49,662 WARN [Listener at localhost/35313] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 08:58:49,662 WARN [ResponseProcessor for block BP-182810667-172.31.14.131-1685782700134:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-182810667-172.31.14.131-1685782700134:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-03 08:58:49,669 INFO [Listener at localhost/35313] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:58:49,731 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xadcb0a0ffb13998a: Processing first storage report for DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484 from datanode 432754c7-bf16-47e8-9c50-fffff622125f 2023-06-03 08:58:49,731 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xadcb0a0ffb13998a: from storage DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484 node DatanodeRegistration(127.0.0.1:38995, datanodeUuid=432754c7-bf16-47e8-9c50-fffff622125f, infoPort=34005, infoSecurePort=0, ipcPort=35313, storageInfo=lv=-57;cid=testClusterID;nsid=1070418625;c=1685782700134), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:58:49,731 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xadcb0a0ffb13998a: Processing first storage report for DS-d54309db-b77d-4c95-acd7-864d3101926c from datanode 432754c7-bf16-47e8-9c50-fffff622125f 2023-06-03 08:58:49,731 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xadcb0a0ffb13998a: from storage DS-d54309db-b77d-4c95-acd7-864d3101926c node DatanodeRegistration(127.0.0.1:38995, datanodeUuid=432754c7-bf16-47e8-9c50-fffff622125f, infoPort=34005, infoSecurePort=0, ipcPort=35313, storageInfo=lv=-57;cid=testClusterID;nsid=1070418625;c=1685782700134), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:58:49,773 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_496614492_17 at /127.0.0.1:50978 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:45627:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50978 dst: /127.0.0.1:45627 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:58:49,774 WARN [BP-182810667-172.31.14.131-1685782700134 heartbeating to localhost/127.0.0.1:35813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 08:58:49,774 WARN [BP-182810667-172.31.14.131-1685782700134 heartbeating to localhost/127.0.0.1:35813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-182810667-172.31.14.131-1685782700134 (Datanode Uuid a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3) service to localhost/127.0.0.1:35813 2023-06-03 08:58:49,775 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data3/current/BP-182810667-172.31.14.131-1685782700134] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:58:49,776 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data4/current/BP-182810667-172.31.14.131-1685782700134] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:58:49,782 WARN [Listener at localhost/35313] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:58:49,784 WARN [Listener at localhost/35313] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:58:49,786 INFO [Listener at localhost/35313] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:58:49,790 INFO [Listener at localhost/35313] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/java.io.tmpdir/Jetty_localhost_45103_datanode____.ip9i9y/webapp 2023-06-03 08:58:49,879 INFO [Listener at localhost/35313] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45103 2023-06-03 08:58:49,886 WARN [Listener at localhost/46441] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:58:49,947 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb0fa989ecbca9515: Processing first storage report for DS-dc8771dc-4912-47c9-bf8a-1dc714a02252 from datanode a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3 2023-06-03 08:58:49,947 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb0fa989ecbca9515: from storage DS-dc8771dc-4912-47c9-bf8a-1dc714a02252 node DatanodeRegistration(127.0.0.1:44159, datanodeUuid=a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3, infoPort=36431, infoSecurePort=0, ipcPort=46441, storageInfo=lv=-57;cid=testClusterID;nsid=1070418625;c=1685782700134), blocks: 8, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-03 08:58:49,948 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb0fa989ecbca9515: Processing first storage report for DS-1005a0e1-3f13-4db6-bbe8-4f025e44de98 from datanode a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3 2023-06-03 08:58:49,948 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb0fa989ecbca9515: from storage DS-1005a0e1-3f13-4db6-bbe8-4f025e44de98 node DatanodeRegistration(127.0.0.1:44159, datanodeUuid=a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3, infoPort=36431, infoSecurePort=0, ipcPort=46441, storageInfo=lv=-57;cid=testClusterID;nsid=1070418625;c=1685782700134), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:58:50,874 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:50,874 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36435%2C1685782700691:(num 1685782700802) roll requested 2023-06-03 08:58:50,874 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:50,875 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:50,881 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-06-03 08:58:50,881 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/WALs/jenkins-hbase4.apache.org,36435,1685782700691/jenkins-hbase4.apache.org%2C36435%2C1685782700691.1685782700802 with entries=88, filesize=43.79 KB; new WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/WALs/jenkins-hbase4.apache.org,36435,1685782700691/jenkins-hbase4.apache.org%2C36435%2C1685782700691.1685782730874 2023-06-03 08:58:50,881 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38995,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK], DatanodeInfoWithStorage[127.0.0.1:44159,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK]] 2023-06-03 08:58:50,881 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/WALs/jenkins-hbase4.apache.org,36435,1685782700691/jenkins-hbase4.apache.org%2C36435%2C1685782700691.1685782700802 is not closed yet, will try archiving it next time 2023-06-03 08:58:50,881 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:50,882 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/WALs/jenkins-hbase4.apache.org,36435,1685782700691/jenkins-hbase4.apache.org%2C36435%2C1685782700691.1685782700802; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:50,889 INFO [Listener at localhost/46441] wal.TestLogRolling(498): Data Nodes restarted 2023-06-03 08:58:50,890 INFO [Listener at localhost/46441] wal.AbstractTestLogRolling(233): Validated row row1004 2023-06-03 08:58:50,891 WARN [RS:0;jenkins-hbase4:46097.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45627,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:50,892 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46097%2C1685782700732:(num 1685782715321) roll requested 2023-06-03 08:58:50,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46097] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45627,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:50,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46097] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:35166 deadline: 1685782740891, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-06-03 08:58:50,900 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782715321 newFile=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782730892 2023-06-03 08:58:50,901 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-06-03 08:58:50,901 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782715321 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782730892 2023-06-03 08:58:50,901 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44159,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK], DatanodeInfoWithStorage[127.0.0.1:38995,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] 2023-06-03 08:58:50,901 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45627,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:58:50,901 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782715321 is not closed yet, will try archiving it next time 2023-06-03 08:58:50,901 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782715321; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45627,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:02,968 DEBUG [Listener at localhost/46441] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782730892 newFile=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956 2023-06-03 08:59:02,969 INFO [Listener at localhost/46441] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782730892 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956 2023-06-03 08:59:02,974 DEBUG [Listener at localhost/46441] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44159,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK], DatanodeInfoWithStorage[127.0.0.1:38995,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] 2023-06-03 08:59:02,974 DEBUG [Listener at localhost/46441] wal.AbstractFSWAL(716): hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782730892 is not closed yet, will try archiving it next time 2023-06-03 08:59:02,975 DEBUG [Listener at localhost/46441] wal.TestLogRolling(512): recovering lease for hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782701118 2023-06-03 08:59:02,976 INFO [Listener at localhost/46441] util.RecoverLeaseFSUtils(86): Recover lease on dfs file 
hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782701118 2023-06-03 08:59:02,979 WARN [IPC Server handler 4 on default port 35813] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782701118 has not been closed. Lease recovery is in progress. RecoveryId = 1022 for block blk_1073741832_1014 2023-06-03 08:59:02,981 INFO [Listener at localhost/46441] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782701118 after 5ms 2023-06-03 08:59:03,971 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@546e93f6] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-182810667-172.31.14.131-1685782700134:blk_1073741832_1014, datanode=DatanodeInfoWithStorage[127.0.0.1:44159,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1014, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2160 getBytesOnDisk() = 2160 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data4/current/BP-182810667-172.31.14.131-1685782700134/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:59:06,982 INFO [Listener at localhost/46441] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782701118 after 4006ms 2023-06-03 08:59:06,982 DEBUG [Listener at localhost/46441] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782701118 2023-06-03 
08:59:06,991 DEBUG [Listener at localhost/46441] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685782701685/Put/vlen=175/seqid=0] 2023-06-03 08:59:06,991 DEBUG [Listener at localhost/46441] wal.TestLogRolling(522): #4: [default/info:d/1685782701726/Put/vlen=9/seqid=0] 2023-06-03 08:59:06,991 DEBUG [Listener at localhost/46441] wal.TestLogRolling(522): #5: [hbase/info:d/1685782701748/Put/vlen=7/seqid=0] 2023-06-03 08:59:06,991 DEBUG [Listener at localhost/46441] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685782702234/Put/vlen=231/seqid=0] 2023-06-03 08:59:06,992 DEBUG [Listener at localhost/46441] wal.TestLogRolling(522): #4: [row1002/info:/1685782711882/Put/vlen=1045/seqid=0] 2023-06-03 08:59:06,992 DEBUG [Listener at localhost/46441] wal.ProtobufLogReader(420): EOF at position 2160 2023-06-03 08:59:06,992 DEBUG [Listener at localhost/46441] wal.TestLogRolling(512): recovering lease for hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782715321 2023-06-03 08:59:06,992 INFO [Listener at localhost/46441] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782715321 2023-06-03 08:59:06,992 WARN [IPC Server handler 3 on default port 35813] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782715321 has not been closed. Lease recovery is in progress. 
RecoveryId = 1023 for block blk_1073741838_1018 2023-06-03 08:59:06,993 INFO [Listener at localhost/46441] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782715321 after 1ms 2023-06-03 08:59:07,952 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@79fa5149] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-182810667-172.31.14.131-1685782700134:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:38995,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data1/current/BP-182810667-172.31.14.131-1685782700134/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data1/current/BP-182810667-172.31.14.131-1685782700134/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 
4 more 2023-06-03 08:59:10,993 INFO [Listener at localhost/46441] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782715321 after 4001ms 2023-06-03 08:59:10,993 DEBUG [Listener at localhost/46441] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782715321 2023-06-03 08:59:10,997 DEBUG [Listener at localhost/46441] wal.TestLogRolling(522): #6: [row1003/info:/1685782725420/Put/vlen=1045/seqid=0] 2023-06-03 08:59:10,997 DEBUG [Listener at localhost/46441] wal.TestLogRolling(522): #7: [row1004/info:/1685782727426/Put/vlen=1045/seqid=0] 2023-06-03 08:59:10,997 DEBUG [Listener at localhost/46441] wal.ProtobufLogReader(420): EOF at position 2425 2023-06-03 08:59:10,998 DEBUG [Listener at localhost/46441] wal.TestLogRolling(512): recovering lease for hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782730892 2023-06-03 08:59:10,998 INFO [Listener at localhost/46441] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782730892 2023-06-03 08:59:10,998 INFO [Listener at localhost/46441] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782730892 after 0ms 2023-06-03 08:59:10,998 DEBUG [Listener at localhost/46441] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782730892 2023-06-03 08:59:11,001 DEBUG [Listener at localhost/46441] wal.TestLogRolling(522): #9: [row1005/info:/1685782740953/Put/vlen=1045/seqid=0] 2023-06-03 08:59:11,001 DEBUG [Listener at localhost/46441] wal.TestLogRolling(512): recovering lease for hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956 2023-06-03 08:59:11,001 INFO [Listener at localhost/46441] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956 2023-06-03 08:59:11,002 WARN [IPC Server handler 1 on default port 35813] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956 has not been closed. Lease recovery is in progress. 
RecoveryId = 1024 for block blk_1073741841_1021 2023-06-03 08:59:11,002 INFO [Listener at localhost/46441] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956 after 1ms 2023-06-03 08:59:11,950 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1676028266_17 at /127.0.0.1:47570 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:44159:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47570 dst: /127.0.0.1:44159 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:44159 remote=/127.0.0.1:47570]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:59:11,952 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1676028266_17 at /127.0.0.1:45486 [Receiving block BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:38995:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45486 dst: /127.0.0.1:38995 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:59:11,951 WARN [ResponseProcessor for block BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-03 08:59:11,952 WARN [DataStreamer for file /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956 block BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:44159,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK], DatanodeInfoWithStorage[127.0.0.1:38995,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:44159,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK]) is bad. 2023-06-03 08:59:11,957 WARN [DataStreamer for file /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956 block BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at 
com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:15,003 INFO [Listener at localhost/46441] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on 
file=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956 after 4002ms 2023-06-03 08:59:15,003 DEBUG [Listener at localhost/46441] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956 2023-06-03 08:59:15,007 DEBUG [Listener at localhost/46441] wal.ProtobufLogReader(420): EOF at position 83 2023-06-03 08:59:15,008 INFO [Listener at localhost/46441] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.95 KB heapSize=5.48 KB 2023-06-03 08:59:15,008 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:15,009 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46097%2C1685782700732.meta:.meta(num 1685782701252) roll requested 2023-06-03 08:59:15,009 DEBUG [Listener at localhost/46441] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-03 08:59:15,009 INFO [Listener at localhost/46441] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:15,010 INFO [Listener at localhost/46441] regionserver.HRegion(2745): Flushing b8dd7b85738eb523dce6fc9be7367138 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-03 08:59:15,011 WARN [RS:0;jenkins-hbase4:46097.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:15,012 DEBUG [Listener at localhost/46441] regionserver.HRegion(2446): Flush status journal for b8dd7b85738eb523dce6fc9be7367138: 2023-06-03 08:59:15,012 INFO [Listener at localhost/46441] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:15,013 INFO [Listener at localhost/46441] regionserver.HRegion(2745): Flushing 4549313c44dac3f8378988dad5edbf05 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-06-03 08:59:15,013 DEBUG [Listener at localhost/46441] regionserver.HRegion(2446): Flush status journal for 4549313c44dac3f8378988dad5edbf05: 2023-06-03 08:59:15,013 INFO [Listener at localhost/46441] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:15,016 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-03 08:59:15,016 INFO [Listener at localhost/46441] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-03 08:59:15,016 DEBUG [Listener at localhost/46441] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x15908d76 to 127.0.0.1:57782 2023-06-03 08:59:15,016 DEBUG [Listener at localhost/46441] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:59:15,016 DEBUG [Listener at localhost/46441] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-03 08:59:15,016 DEBUG [Listener at localhost/46441] util.JVMClusterUtil(257): Found active master hash=580353613, stopped=false 2023-06-03 08:59:15,016 INFO [Listener at localhost/46441] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36435,1685782700691 2023-06-03 08:59:15,022 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-03 08:59:15,022 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): regionserver:46097-0x1008fe925840001, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-03 08:59:15,022 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:59:15,022 INFO [Listener at localhost/46441] procedure2.ProcedureExecutor(629): Stopping 2023-06-03 08:59:15,023 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:59:15,023 DEBUG [Listener at localhost/46441] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x29157a56 to 127.0.0.1:57782 2023-06-03 08:59:15,023 DEBUG [Listener at localhost/46441] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:59:15,023 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46097-0x1008fe925840001, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:59:15,023 INFO [Listener at localhost/46441] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,46097,1685782700732' ***** 2023-06-03 08:59:15,024 INFO [Listener at localhost/46441] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-03 08:59:15,024 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-06-03 08:59:15,024 INFO [RS:0;jenkins-hbase4:46097] regionserver.HeapMemoryManager(220): Stopping 2023-06-03 08:59:15,024 INFO 
[regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.meta.1685782701252.meta with entries=11, filesize=3.72 KB; new WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.meta.1685782755009.meta 2023-06-03 08:59:15,024 INFO [RS:0;jenkins-hbase4:46097] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-03 08:59:15,024 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-03 08:59:15,024 INFO [RS:0;jenkins-hbase4:46097] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-03 08:59:15,025 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38995,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK], DatanodeInfoWithStorage[127.0.0.1:44159,DS-dc8771dc-4912-47c9-bf8a-1dc714a02252,DISK]] 2023-06-03 08:59:15,025 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(3303): Received CLOSE for b8dd7b85738eb523dce6fc9be7367138 2023-06-03 08:59:15,025 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:15,025 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.meta.1685782701252.meta is not closed yet, will try archiving it next time 2023-06-03 08:59:15,025 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46097%2C1685782700732:(num 1685782742956) roll requested 2023-06-03 08:59:15,027 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.meta.1685782701252.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43659,DS-00a3f079-81ce-4aa2-b979-6b2e8c71e484,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:15,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b8dd7b85738eb523dce6fc9be7367138, disabling compactions & flushes 2023-06-03 08:59:15,027 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(3303): Received CLOSE for 4549313c44dac3f8378988dad5edbf05 2023-06-03 08:59:15,027 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 2023-06-03 08:59:15,027 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:59:15,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 2023-06-03 08:59:15,027 DEBUG [RS:0;jenkins-hbase4:46097] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4c48af43 to 127.0.0.1:57782 2023-06-03 08:59:15,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. after waiting 0 ms 2023-06-03 08:59:15,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 2023-06-03 08:59:15,027 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b8dd7b85738eb523dce6fc9be7367138 1/1 column families, dataSize=78 B heapSize=728 B 2023-06-03 08:59:15,027 DEBUG [RS:0;jenkins-hbase4:46097] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:59:15,027 INFO [RS:0;jenkins-hbase4:46097] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-03 08:59:15,027 INFO [RS:0;jenkins-hbase4:46097] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-03 08:59:15,028 INFO [RS:0;jenkins-hbase4:46097] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-03 08:59:15,027 WARN [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultMemStore(90): Snapshot called again without clearing previous. Doing nothing. Another ongoing flush or did we fail last attempt? 2023-06-03 08:59:15,028 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-03 08:59:15,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b8dd7b85738eb523dce6fc9be7367138: 2023-06-03 08:59:15,028 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-03 08:59:15,029 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,46097,1685782700732: Unrecoverable exception while closing hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 
***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:15,029 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 08:59:15,029 DEBUG [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, b8dd7b85738eb523dce6fc9be7367138=hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138., 4549313c44dac3f8378988dad5edbf05=TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05.} 2023-06-03 08:59:15,029 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 08:59:15,029 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(3303): Received CLOSE for b8dd7b85738eb523dce6fc9be7367138 2023-06-03 08:59:15,029 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-06-03 08:59:15,029 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 08:59:15,029 DEBUG [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1504): Waiting on 1588230740, 4549313c44dac3f8378988dad5edbf05, 
b8dd7b85738eb523dce6fc9be7367138 2023-06-03 08:59:15,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-06-03 08:59:15,029 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 08:59:15,029 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 08:59:15,029 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 08:59:15,029 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-03 08:59:15,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-06-03 08:59:15,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-06-03 08:59:15,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-06-03 08:59:15,030 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1037565952, "init": 513802240, "max": 2051014656, "used": 400367360 }, "NonHeapMemoryUsage": { "committed": 139419648, "init": 2555904, "max": -1, "used": 136912816 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-06-03 08:59:15,031 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(609): jenkins-hbase4.apache.org,46097,1685782700732 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,46097,1685782700732: Unrecoverable exception while closing hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 
***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:15,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4549313c44dac3f8378988dad5edbf05, disabling compactions & flushes 2023-06-03 08:59:15,031 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 2023-06-03 08:59:15,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 2023-06-03 08:59:15,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. after waiting 0 ms 2023-06-03 08:59:15,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 
2023-06-03 08:59:15,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4549313c44dac3f8378988dad5edbf05: 2023-06-03 08:59:15,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 2023-06-03 08:59:15,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b8dd7b85738eb523dce6fc9be7367138, disabling compactions & flushes 2023-06-03 08:59:15,033 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 2023-06-03 08:59:15,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 2023-06-03 08:59:15,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. after waiting 0 ms 2023-06-03 08:59:15,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 2023-06-03 08:59:15,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b8dd7b85738eb523dce6fc9be7367138: 2023-06-03 08:59:15,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685782701314.b8dd7b85738eb523dce6fc9be7367138. 2023-06-03 08:59:15,034 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956 newFile=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782755025 2023-06-03 08:59:15,035 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-06-03 08:59:15,035 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782755025 2023-06-03 08:59:15,035 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:15,035 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956 failed. 
Cause="Unexpected BlockUCState: BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-06-03 08:59:15,035 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:15,035 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: 
hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732/jenkins-hbase4.apache.org%2C46097%2C1685782700732.1685782742956, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-182810667-172.31.14.131-1685782700134:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-03 08:59:15,036 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:59:15,036 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-06-03 08:59:15,037 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/WALs/jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:59:15,041 DEBUG [regionserver/jenkins-hbase4:0.logRoller] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Failed log close in log roller 2023-06-03 08:59:15,229 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-03 08:59:15,229 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(3303): Received CLOSE for 4549313c44dac3f8378988dad5edbf05 2023-06-03 08:59:15,229 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 08:59:15,230 DEBUG [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1504): Waiting on 1588230740, 4549313c44dac3f8378988dad5edbf05 2023-06-03 08:59:15,230 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 08:59:15,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4549313c44dac3f8378988dad5edbf05, disabling compactions & flushes 2023-06-03 08:59:15,230 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 08:59:15,230 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 2023-06-03 08:59:15,230 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 08:59:15,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 
2023-06-03 08:59:15,230 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 08:59:15,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. after waiting 0 ms 2023-06-03 08:59:15,230 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 08:59:15,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 2023-06-03 08:59:15,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4549313c44dac3f8378988dad5edbf05: 2023-06-03 08:59:15,230 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-03 08:59:15,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685782701868.4549313c44dac3f8378988dad5edbf05. 2023-06-03 08:59:15,430 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-06-03 08:59:15,430 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46097,1685782700732; all regions closed. 2023-06-03 08:59:15,430 DEBUG [RS:0;jenkins-hbase4:46097] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:59:15,430 INFO [RS:0;jenkins-hbase4:46097] regionserver.LeaseManager(133): Closed leases 2023-06-03 08:59:15,430 INFO [RS:0;jenkins-hbase4:46097] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-03 08:59:15,430 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-03 08:59:15,431 INFO [RS:0;jenkins-hbase4:46097] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46097 2023-06-03 08:59:15,435 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): regionserver:46097-0x1008fe925840001, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46097,1685782700732 2023-06-03 08:59:15,435 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:59:15,435 ERROR [Listener at localhost/41967-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@52db7fc5 rejected from java.util.concurrent.ThreadPoolExecutor@56e4e146[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 4] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-06-03 08:59:15,435 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): regionserver:46097-0x1008fe925840001, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:59:15,436 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46097,1685782700732] 2023-06-03 08:59:15,436 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46097,1685782700732; numProcessing=1 2023-06-03 08:59:15,437 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46097,1685782700732 already deleted, retry=false 2023-06-03 08:59:15,437 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46097,1685782700732 expired; onlineServers=0 2023-06-03 08:59:15,437 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,36435,1685782700691' ***** 2023-06-03 08:59:15,437 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-03 08:59:15,438 DEBUG [M:0;jenkins-hbase4:36435] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26a8eec7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-03 08:59:15,438 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36435,1685782700691 2023-06-03 08:59:15,438 INFO 
[M:0;jenkins-hbase4:36435] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36435,1685782700691; all regions closed. 2023-06-03 08:59:15,438 DEBUG [M:0;jenkins-hbase4:36435] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 08:59:15,438 DEBUG [M:0;jenkins-hbase4:36435] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-03 08:59:15,438 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-03 08:59:15,438 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782700878] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782700878,5,FailOnTimeoutGroup] 2023-06-03 08:59:15,438 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782700878] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782700878,5,FailOnTimeoutGroup] 2023-06-03 08:59:15,438 DEBUG [M:0;jenkins-hbase4:36435] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-03 08:59:15,439 INFO [M:0;jenkins-hbase4:36435] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-03 08:59:15,439 INFO [M:0;jenkins-hbase4:36435] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-03 08:59:15,439 INFO [M:0;jenkins-hbase4:36435] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-06-03 08:59:15,440 DEBUG [M:0;jenkins-hbase4:36435] master.HMaster(1512): Stopping service threads 2023-06-03 08:59:15,440 INFO [M:0;jenkins-hbase4:36435] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-03 08:59:15,440 ERROR [M:0;jenkins-hbase4:36435] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-03 08:59:15,440 INFO [M:0;jenkins-hbase4:36435] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-03 08:59:15,440 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-03 08:59:15,441 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-03 08:59:15,441 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:59:15,441 DEBUG [M:0;jenkins-hbase4:36435] zookeeper.ZKUtil(398): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-03 08:59:15,441 WARN [M:0;jenkins-hbase4:36435] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-03 08:59:15,441 INFO [M:0;jenkins-hbase4:36435] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-03 08:59:15,441 INFO [M:0;jenkins-hbase4:36435] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-03 08:59:15,441 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 08:59:15,442 DEBUG [M:0;jenkins-hbase4:36435] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-03 08:59:15,442 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:59:15,442 DEBUG [M:0;jenkins-hbase4:36435] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:59:15,442 DEBUG [M:0;jenkins-hbase4:36435] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-03 08:59:15,442 DEBUG [M:0;jenkins-hbase4:36435] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-03 08:59:15,442 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.16 KB heapSize=45.78 KB 2023-06-03 08:59:15,453 INFO [M:0;jenkins-hbase4:36435] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.16 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e866fe9cb6cb461eb396392df0ac1747 2023-06-03 08:59:15,458 DEBUG [M:0;jenkins-hbase4:36435] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e866fe9cb6cb461eb396392df0ac1747 as hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e866fe9cb6cb461eb396392df0ac1747 2023-06-03 08:59:15,463 INFO [M:0;jenkins-hbase4:36435] regionserver.HStore(1080): Added hdfs://localhost:35813/user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e866fe9cb6cb461eb396392df0ac1747, entries=11, sequenceid=92, filesize=7.0 K 2023-06-03 08:59:15,463 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegion(2948): Finished flush of dataSize ~38.16 KB/39075, heapSize ~45.77 KB/46864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=92, compaction requested=false 2023-06-03 08:59:15,465 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:59:15,465 DEBUG [M:0;jenkins-hbase4:36435] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 08:59:15,465 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/95872feb-3376-652b-4e5b-a0d8fb93ff62/MasterData/WALs/jenkins-hbase4.apache.org,36435,1685782700691 2023-06-03 08:59:15,468 INFO [M:0;jenkins-hbase4:36435] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-03 08:59:15,468 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-03 08:59:15,468 INFO [M:0;jenkins-hbase4:36435] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36435 2023-06-03 08:59:15,471 DEBUG [M:0;jenkins-hbase4:36435] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36435,1685782700691 already deleted, retry=false 2023-06-03 08:59:15,536 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): regionserver:46097-0x1008fe925840001, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:59:15,536 INFO [RS:0;jenkins-hbase4:46097] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46097,1685782700732; zookeeper connection closed. 
2023-06-03 08:59:15,536 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): regionserver:46097-0x1008fe925840001, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:59:15,537 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4ca65159] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4ca65159 2023-06-03 08:59:15,540 INFO [Listener at localhost/46441] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-03 08:59:15,636 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:59:15,636 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36435,1685782700691; zookeeper connection closed. 2023-06-03 08:59:15,637 DEBUG [Listener at localhost/41967-EventThread] zookeeper.ZKWatcher(600): master:36435-0x1008fe925840000, quorum=127.0.0.1:57782, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 08:59:15,637 WARN [Listener at localhost/46441] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 08:59:15,641 INFO [Listener at localhost/46441] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:59:15,745 WARN [BP-182810667-172.31.14.131-1685782700134 heartbeating to localhost/127.0.0.1:35813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 08:59:15,745 WARN [BP-182810667-172.31.14.131-1685782700134 heartbeating to localhost/127.0.0.1:35813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-182810667-172.31.14.131-1685782700134 (Datanode Uuid a4db4e8f-4609-4f69-beb8-ea8dcd6c66d3) service to localhost/127.0.0.1:35813 2023-06-03 08:59:15,746 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data3/current/BP-182810667-172.31.14.131-1685782700134] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:59:15,746 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data4/current/BP-182810667-172.31.14.131-1685782700134] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:59:15,748 WARN [Listener at localhost/46441] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 08:59:15,751 INFO [Listener at localhost/46441] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:59:15,855 WARN [BP-182810667-172.31.14.131-1685782700134 heartbeating to localhost/127.0.0.1:35813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 08:59:15,855 WARN [BP-182810667-172.31.14.131-1685782700134 heartbeating to localhost/127.0.0.1:35813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-182810667-172.31.14.131-1685782700134 
(Datanode Uuid 432754c7-bf16-47e8-9c50-fffff622125f) service to localhost/127.0.0.1:35813 2023-06-03 08:59:15,855 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data1/current/BP-182810667-172.31.14.131-1685782700134] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:59:15,856 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/cluster_8d1d8d09-e9dd-1709-3d53-c4f18f81fe60/dfs/data/data2/current/BP-182810667-172.31.14.131-1685782700134] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 08:59:15,866 INFO [Listener at localhost/46441] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 08:59:15,978 INFO [Listener at localhost/46441] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-03 08:59:15,990 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-03 08:59:16,000 INFO [Listener at localhost/46441] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=86 (was 75) Potentially hanging thread: nioEventLoopGroup-27-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1524112806) connection to localhost/127.0.0.1:35813 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-29-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-1 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46441 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:35813 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:35813 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1524112806) connection to localhost/127.0.0.1:35813 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-28-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1524112806) connection to localhost/127.0.0.1:35813 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-26-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=465 (was 459) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=37 (was 76), ProcessCount=169 (was 169), AvailableMemoryMB=1209 (was 1550) 2023-06-03 08:59:16,007 INFO [Listener at localhost/46441] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=86, OpenFileDescriptor=465, MaxFileDescriptor=60000, SystemLoadAverage=37, ProcessCount=169, AvailableMemoryMB=1209 2023-06-03 08:59:16,008 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-03 08:59:16,008 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/hadoop.log.dir so I do NOT create it in target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4 2023-06-03 08:59:16,008 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2db02e35-b17b-193d-a3d4-2e2c2f9b9fbf/hadoop.tmp.dir so I do NOT create it in target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4 2023-06-03 08:59:16,008 INFO [Listener at localhost/46441] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/cluster_6f9ccffd-83cd-042f-afc3-b0a25e59bab4, deleteOnExit=true 2023-06-03 08:59:16,008 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(1082): STARTING DFS 
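The ResourceChecker record above snapshots thread and file-descriptor counts before testCompactionRecordDoesntBlockRolling, and the StartMiniClusterOption line spells out the topology that test requests. A minimal sketch of asking for the same topology through the HBase 2.x builder API, assuming the standard HBaseTestingUtility/StartMiniClusterOption classes rather than the literal test code:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class StartOptionSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Mirrors the logged option: 1 master, 1 region server, 2 datanodes, 1 ZK server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);
    util.shutdownMiniCluster();
  }
}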
2023-06-03 08:59:16,008 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/test.cache.data in system properties and HBase conf 2023-06-03 08:59:16,008 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/hadoop.tmp.dir in system properties and HBase conf 2023-06-03 08:59:16,008 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/hadoop.log.dir in system properties and HBase conf 2023-06-03 08:59:16,008 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-03 08:59:16,009 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-03 08:59:16,009 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-03 08:59:16,009 DEBUG [Listener at localhost/46441] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-06-03 08:59:16,009 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-03 08:59:16,009 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-03 08:59:16,009 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-03 08:59:16,009 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-03 08:59:16,009 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-03 08:59:16,010 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-03 08:59:16,010 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-03 08:59:16,010 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-03 08:59:16,010 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-03 08:59:16,010 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/nfs.dump.dir in system properties and HBase conf 2023-06-03 08:59:16,010 INFO [Listener at localhost/46441] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/java.io.tmpdir in system properties and HBase conf 2023-06-03 08:59:16,010 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-03 08:59:16,010 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-03 08:59:16,010 INFO [Listener at localhost/46441] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-03 08:59:16,012 WARN [Listener at localhost/46441] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-03 08:59:16,015 WARN [Listener at localhost/46441] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-03 08:59:16,015 WARN [Listener at localhost/46441] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-03 08:59:16,054 WARN [Listener at localhost/46441] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:59:16,055 INFO [Listener at localhost/46441] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:59:16,062 INFO [Listener at localhost/46441] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/java.io.tmpdir/Jetty_localhost_46613_hdfs____.s2pfk/webapp 2023-06-03 08:59:16,152 INFO [Listener at localhost/46441] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46613 2023-06-03 08:59:16,153 WARN [Listener at localhost/46441] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
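With the NameNode web server up on an ephemeral port (46613 in this run), the datanodes come up next. Since every port in these logs is chosen per run, tests resolve the backing filesystem and root directory from the configuration rather than hard-coding hdfs://localhost:... URIs. A minimal sketch, assuming standard Hadoop/HBase APIs; the helper name is illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseTestingUtility;

public class RootDirSketch {
  // 'util' is assumed to have already started the mini DFS cluster, as above.
  static void printLayout(HBaseTestingUtility util) throws Exception {
    Configuration conf = util.getConfiguration();
    FileSystem fs = FileSystem.get(conf);                // resolves to the mini HDFS instance
    Path rootDir = new Path(conf.get("hbase.rootdir"));  // per-run test-data dir set by the utility
    System.out.println("fs=" + fs.getUri() + " rootdir=" + rootDir);
  }
}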
2023-06-03 08:59:16,156 WARN [Listener at localhost/46441] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-03 08:59:16,156 WARN [Listener at localhost/46441] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-03 08:59:16,197 WARN [Listener at localhost/37159] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:59:16,207 WARN [Listener at localhost/37159] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:59:16,209 WARN [Listener at localhost/37159] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:59:16,210 INFO [Listener at localhost/37159] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:59:16,215 INFO [Listener at localhost/37159] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/java.io.tmpdir/Jetty_localhost_37913_datanode____.8xo482/webapp 2023-06-03 08:59:16,305 INFO [Listener at localhost/37159] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37913 2023-06-03 08:59:16,311 WARN [Listener at localhost/46285] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:59:16,323 WARN [Listener at localhost/46285] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 08:59:16,325 WARN [Listener at localhost/46285] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 08:59:16,326 INFO [Listener at localhost/46285] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 08:59:16,329 INFO [Listener at localhost/46285] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/java.io.tmpdir/Jetty_localhost_46159_datanode____.r177wm/webapp 2023-06-03 08:59:16,406 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x386ce576d0fb1542: Processing first storage report for DS-5e001058-56e3-4ae7-b350-ab9337ecd6fc from datanode 91162933-e246-4235-932b-f99164063272 2023-06-03 08:59:16,406 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x386ce576d0fb1542: from storage DS-5e001058-56e3-4ae7-b350-ab9337ecd6fc node DatanodeRegistration(127.0.0.1:37273, datanodeUuid=91162933-e246-4235-932b-f99164063272, infoPort=45549, infoSecurePort=0, ipcPort=46285, storageInfo=lv=-57;cid=testClusterID;nsid=1912422329;c=1685782756017), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:59:16,406 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x386ce576d0fb1542: Processing first storage report for DS-d5c4a7a7-7ad7-48e1-9572-0d3eb97365e1 from datanode 91162933-e246-4235-932b-f99164063272 2023-06-03 08:59:16,406 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0x386ce576d0fb1542: from storage DS-d5c4a7a7-7ad7-48e1-9572-0d3eb97365e1 node DatanodeRegistration(127.0.0.1:37273, datanodeUuid=91162933-e246-4235-932b-f99164063272, infoPort=45549, infoSecurePort=0, ipcPort=46285, storageInfo=lv=-57;cid=testClusterID;nsid=1912422329;c=1685782756017), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:59:16,432 INFO [Listener at localhost/46285] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46159 2023-06-03 08:59:16,442 WARN [Listener at localhost/36119] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 08:59:16,521 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x552e744a75d9b7cb: Processing first storage report for DS-e21efd32-e505-463e-87c1-8dcd0c0b9b0f from datanode 0dc83e59-f68c-4609-bc44-ad1197386834 2023-06-03 08:59:16,521 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x552e744a75d9b7cb: from storage DS-e21efd32-e505-463e-87c1-8dcd0c0b9b0f node DatanodeRegistration(127.0.0.1:39333, datanodeUuid=0dc83e59-f68c-4609-bc44-ad1197386834, infoPort=43653, infoSecurePort=0, ipcPort=36119, storageInfo=lv=-57;cid=testClusterID;nsid=1912422329;c=1685782756017), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:59:16,521 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x552e744a75d9b7cb: Processing first storage report for DS-a227f81b-2370-457e-8e79-2f4aa793f482 from datanode 0dc83e59-f68c-4609-bc44-ad1197386834 2023-06-03 08:59:16,521 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x552e744a75d9b7cb: from storage DS-a227f81b-2370-457e-8e79-2f4aa793f482 node DatanodeRegistration(127.0.0.1:39333, datanodeUuid=0dc83e59-f68c-4609-bc44-ad1197386834, infoPort=43653, infoSecurePort=0, ipcPort=36119, storageInfo=lv=-57;cid=testClusterID;nsid=1912422329;c=1685782756017), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 08:59:16,548 DEBUG [Listener at localhost/36119] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4 2023-06-03 08:59:16,550 INFO [Listener at localhost/36119] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/cluster_6f9ccffd-83cd-042f-afc3-b0a25e59bab4/zookeeper_0, clientPort=54897, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/cluster_6f9ccffd-83cd-042f-afc3-b0a25e59bab4/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/cluster_6f9ccffd-83cd-042f-afc3-b0a25e59bab4/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-03 08:59:16,551 INFO [Listener at localhost/36119] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54897 2023-06-03 08:59:16,551 INFO [Listener at localhost/36119] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:59:16,553 INFO [Listener at localhost/36119] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:59:16,565 INFO [Listener at localhost/36119] util.FSUtils(471): Created version file at hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c with version=8 2023-06-03 08:59:16,565 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/hbase-staging 2023-06-03 08:59:16,567 INFO [Listener at localhost/36119] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-03 08:59:16,567 INFO [Listener at localhost/36119] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:59:16,567 INFO [Listener at localhost/36119] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-03 08:59:16,567 INFO [Listener at localhost/36119] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-03 08:59:16,567 INFO [Listener at localhost/36119] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:59:16,568 INFO [Listener at localhost/36119] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-03 08:59:16,568 INFO [Listener at localhost/36119] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-03 08:59:16,569 INFO [Listener at localhost/36119] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38191 2023-06-03 08:59:16,569 INFO [Listener at localhost/36119] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:59:16,570 INFO [Listener at localhost/36119] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:59:16,571 INFO [Listener at localhost/36119] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38191 connecting to ZooKeeper ensemble=127.0.0.1:54897 2023-06-03 08:59:16,577 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:381910x0, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-03 08:59:16,578 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38191-0x1008fe9ffc90000 connected 2023-06-03 08:59:16,591 DEBUG [Listener at localhost/36119] 
zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 08:59:16,591 DEBUG [Listener at localhost/36119] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:59:16,592 DEBUG [Listener at localhost/36119] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-03 08:59:16,592 DEBUG [Listener at localhost/36119] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38191 2023-06-03 08:59:16,592 DEBUG [Listener at localhost/36119] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38191 2023-06-03 08:59:16,593 DEBUG [Listener at localhost/36119] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38191 2023-06-03 08:59:16,593 DEBUG [Listener at localhost/36119] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38191 2023-06-03 08:59:16,594 DEBUG [Listener at localhost/36119] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38191 2023-06-03 08:59:16,594 INFO [Listener at localhost/36119] master.HMaster(444): hbase.rootdir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c, hbase.cluster.distributed=false 2023-06-03 08:59:16,607 INFO [Listener at localhost/36119] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-03 08:59:16,607 INFO [Listener at localhost/36119] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:59:16,607 INFO [Listener at localhost/36119] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-03 08:59:16,607 INFO [Listener at localhost/36119] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-03 08:59:16,607 INFO [Listener at localhost/36119] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 08:59:16,607 INFO [Listener at localhost/36119] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-03 08:59:16,607 INFO [Listener at localhost/36119] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-03 08:59:16,609 INFO [Listener at localhost/36119] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38381 2023-06-03 08:59:16,609 INFO [Listener at localhost/36119] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-03 08:59:16,610 DEBUG [Listener at localhost/36119] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-03 
08:59:16,610 INFO [Listener at localhost/36119] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:59:16,611 INFO [Listener at localhost/36119] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:59:16,612 INFO [Listener at localhost/36119] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38381 connecting to ZooKeeper ensemble=127.0.0.1:54897 2023-06-03 08:59:16,616 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:383810x0, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-03 08:59:16,617 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38381-0x1008fe9ffc90001 connected 2023-06-03 08:59:16,617 DEBUG [Listener at localhost/36119] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 08:59:16,617 DEBUG [Listener at localhost/36119] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 08:59:16,618 DEBUG [Listener at localhost/36119] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-03 08:59:16,618 DEBUG [Listener at localhost/36119] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38381 2023-06-03 08:59:16,619 DEBUG [Listener at localhost/36119] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38381 2023-06-03 08:59:16,619 DEBUG [Listener at localhost/36119] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38381 2023-06-03 08:59:16,619 DEBUG [Listener at localhost/36119] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38381 2023-06-03 08:59:16,619 DEBUG [Listener at localhost/36119] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38381 2023-06-03 08:59:16,620 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,38191,1685782756566 2023-06-03 08:59:16,622 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-03 08:59:16,622 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,38191,1685782756566 2023-06-03 08:59:16,625 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-03 08:59:16,625 DEBUG 
[Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-03 08:59:16,625 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:59:16,626 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-03 08:59:16,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,38191,1685782756566 from backup master directory 2023-06-03 08:59:16,626 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-03 08:59:16,627 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,38191,1685782756566 2023-06-03 08:59:16,628 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-03 08:59:16,628 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-03 08:59:16,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,38191,1685782756566 2023-06-03 08:59:16,640 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/hbase.id with ID: 7d2b672d-77ed-461b-8a49-b023b4033f6d 2023-06-03 08:59:16,652 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:59:16,654 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:59:16,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x49b03962 to 127.0.0.1:54897 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 08:59:16,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4fdc7329, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 08:59:16,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region 
for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-03 08:59:16,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-03 08:59:16,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 08:59:16,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/data/master/store-tmp 2023-06-03 08:59:16,674 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:59:16,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-03 08:59:16,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:59:16,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:59:16,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-03 08:59:16,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 08:59:16,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
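The MasterRegion records above show the master bootstrapping its local 'master:store' table with a single 'proc' column family (ROW bloom filter, one version, no compression or encoding). Purely to make that logged descriptor easier to read, here is an equivalent built with the public descriptor builders; this is illustrative only, since the master creates this region internally and a test never would:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  static TableDescriptor masterStoreLike() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setMaxVersions(1)                   // VERSIONS => '1'
            .build())
        .build();
  }
}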
2023-06-03 08:59:16,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 08:59:16,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/WALs/jenkins-hbase4.apache.org,38191,1685782756566 2023-06-03 08:59:16,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38191%2C1685782756566, suffix=, logDir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/WALs/jenkins-hbase4.apache.org,38191,1685782756566, archiveDir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/oldWALs, maxLogs=10 2023-06-03 08:59:16,684 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/WALs/jenkins-hbase4.apache.org,38191,1685782756566/jenkins-hbase4.apache.org%2C38191%2C1685782756566.1685782756678 2023-06-03 08:59:16,684 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39333,DS-e21efd32-e505-463e-87c1-8dcd0c0b9b0f,DISK], DatanodeInfoWithStorage[127.0.0.1:37273,DS-5e001058-56e3-4ae7-b350-ab9337ecd6fc,DISK]] 2023-06-03 08:59:16,685 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:59:16,685 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:59:16,685 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:59:16,685 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:59:16,687 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:59:16,688 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-03 08:59:16,688 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-03 08:59:16,689 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:59:16,690 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:59:16,690 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:59:16,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 08:59:16,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:59:16,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=832380, jitterRate=0.058426499366760254}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 08:59:16,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 08:59:16,696 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-03 08:59:16,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-03 08:59:16,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-03 08:59:16,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-06-03 08:59:16,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-03 08:59:16,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-03 08:59:16,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-03 08:59:16,698 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-03 08:59:16,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-03 08:59:16,710 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-03 08:59:16,710 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-03 08:59:16,710 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-03 08:59:16,710 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-03 08:59:16,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-03 08:59:16,713 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:59:16,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-03 08:59:16,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-03 08:59:16,715 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-03 08:59:16,716 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-03 08:59:16,716 DEBUG [Listener at localhost/36119-EventThread] 
zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-03 08:59:16,716 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:59:16,717 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,38191,1685782756566, sessionid=0x1008fe9ffc90000, setting cluster-up flag (Was=false) 2023-06-03 08:59:16,720 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:59:16,725 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-03 08:59:16,726 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38191,1685782756566 2023-06-03 08:59:16,730 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:59:16,733 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-03 08:59:16,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38191,1685782756566 2023-06-03 08:59:16,735 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/.hbase-snapshot/.tmp 2023-06-03 08:59:16,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-03 08:59:16,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:59:16,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:59:16,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:59:16,738 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 08:59:16,738 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, 
maxPoolSize=10 2023-06-03 08:59:16,738 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:59:16,738 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-03 08:59:16,738 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:59:16,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685782786740 2023-06-03 08:59:16,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-03 08:59:16,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-03 08:59:16,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-03 08:59:16,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-03 08:59:16,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-03 08:59:16,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-03 08:59:16,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-06-03 08:59:16,741 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-03 08:59:16,741 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-03 08:59:16,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-03 08:59:16,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-03 08:59:16,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-03 08:59:16,742 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-03 08:59:16,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-03 08:59:16,743 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-03 08:59:16,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782756743,5,FailOnTimeoutGroup] 2023-06-03 08:59:16,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782756748,5,FailOnTimeoutGroup] 2023-06-03 08:59:16,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-03 08:59:16,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-03 08:59:16,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-03 08:59:16,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-03 08:59:16,756 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-03 08:59:16,756 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-03 08:59:16,756 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c 2023-06-03 08:59:16,764 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:59:16,765 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-03 08:59:16,766 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/info 2023-06-03 08:59:16,766 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-03 08:59:16,767 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:59:16,767 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-03 08:59:16,768 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/rep_barrier 2023-06-03 08:59:16,768 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-03 08:59:16,769 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:59:16,769 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-03 08:59:16,770 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/table 2023-06-03 08:59:16,770 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-03 08:59:16,771 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:59:16,772 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740 2023-06-03 08:59:16,772 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740 2023-06-03 08:59:16,774 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-03 08:59:16,775 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-03 08:59:16,776 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:59:16,777 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=721905, jitterRate=-0.08205083012580872}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-03 08:59:16,777 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-03 08:59:16,777 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 08:59:16,777 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 08:59:16,777 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 08:59:16,777 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 08:59:16,777 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 08:59:16,777 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-03 08:59:16,777 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 08:59:16,778 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-03 08:59:16,778 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-03 08:59:16,779 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-03 08:59:16,780 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-03 08:59:16,781 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-03 08:59:16,821 INFO [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(951): ClusterId : 7d2b672d-77ed-461b-8a49-b023b4033f6d 2023-06-03 08:59:16,822 DEBUG [RS:0;jenkins-hbase4:38381] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-03 08:59:16,825 DEBUG [RS:0;jenkins-hbase4:38381] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-03 08:59:16,825 DEBUG [RS:0;jenkins-hbase4:38381] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-03 08:59:16,827 DEBUG 
[RS:0;jenkins-hbase4:38381] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-03 08:59:16,828 DEBUG [RS:0;jenkins-hbase4:38381] zookeeper.ReadOnlyZKClient(139): Connect 0x59a7a632 to 127.0.0.1:54897 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 08:59:16,831 DEBUG [RS:0;jenkins-hbase4:38381] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@57605376, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 08:59:16,831 DEBUG [RS:0;jenkins-hbase4:38381] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@535fcd42, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-03 08:59:16,841 DEBUG [RS:0;jenkins-hbase4:38381] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:38381 2023-06-03 08:59:16,841 INFO [RS:0;jenkins-hbase4:38381] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-03 08:59:16,841 INFO [RS:0;jenkins-hbase4:38381] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-03 08:59:16,841 DEBUG [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(1022): About to register with Master. 2023-06-03 08:59:16,842 INFO [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,38191,1685782756566 with isa=jenkins-hbase4.apache.org/172.31.14.131:38381, startcode=1685782756606 2023-06-03 08:59:16,842 DEBUG [RS:0;jenkins-hbase4:38381] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-03 08:59:16,845 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60785, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-06-03 08:59:16,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:16,846 DEBUG [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c 2023-06-03 08:59:16,846 DEBUG [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37159 2023-06-03 08:59:16,846 DEBUG [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-03 08:59:16,847 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 08:59:16,848 DEBUG [RS:0;jenkins-hbase4:38381] zookeeper.ZKUtil(162): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:16,848 WARN [RS:0;jenkins-hbase4:38381] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared 
on crash by start scripts (Longer MTTR!) 2023-06-03 08:59:16,848 INFO [RS:0;jenkins-hbase4:38381] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 08:59:16,848 DEBUG [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(1946): logDir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:16,848 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38381,1685782756606] 2023-06-03 08:59:16,852 DEBUG [RS:0;jenkins-hbase4:38381] zookeeper.ZKUtil(162): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:16,853 DEBUG [RS:0;jenkins-hbase4:38381] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-03 08:59:16,853 INFO [RS:0;jenkins-hbase4:38381] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-03 08:59:16,854 INFO [RS:0;jenkins-hbase4:38381] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-03 08:59:16,854 INFO [RS:0;jenkins-hbase4:38381] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-03 08:59:16,854 INFO [RS:0;jenkins-hbase4:38381] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 08:59:16,855 INFO [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-03 08:59:16,856 INFO [RS:0;jenkins-hbase4:38381] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-03 08:59:16,856 DEBUG [RS:0;jenkins-hbase4:38381] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:59:16,857 DEBUG [RS:0;jenkins-hbase4:38381] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:59:16,857 DEBUG [RS:0;jenkins-hbase4:38381] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:59:16,857 DEBUG [RS:0;jenkins-hbase4:38381] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:59:16,857 DEBUG [RS:0;jenkins-hbase4:38381] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:59:16,857 DEBUG [RS:0;jenkins-hbase4:38381] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-03 08:59:16,857 DEBUG [RS:0;jenkins-hbase4:38381] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:59:16,857 DEBUG [RS:0;jenkins-hbase4:38381] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:59:16,857 DEBUG [RS:0;jenkins-hbase4:38381] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:59:16,857 DEBUG [RS:0;jenkins-hbase4:38381] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 08:59:16,858 INFO [RS:0;jenkins-hbase4:38381] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-03 08:59:16,858 INFO [RS:0;jenkins-hbase4:38381] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-03 08:59:16,858 INFO [RS:0;jenkins-hbase4:38381] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-03 08:59:16,869 INFO [RS:0;jenkins-hbase4:38381] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-03 08:59:16,869 INFO [RS:0;jenkins-hbase4:38381] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38381,1685782756606-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-03 08:59:16,885 INFO [RS:0;jenkins-hbase4:38381] regionserver.Replication(203): jenkins-hbase4.apache.org,38381,1685782756606 started 2023-06-03 08:59:16,885 INFO [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38381,1685782756606, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38381, sessionid=0x1008fe9ffc90001 2023-06-03 08:59:16,885 DEBUG [RS:0;jenkins-hbase4:38381] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-03 08:59:16,885 DEBUG [RS:0;jenkins-hbase4:38381] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:16,885 DEBUG [RS:0;jenkins-hbase4:38381] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38381,1685782756606' 2023-06-03 08:59:16,885 DEBUG [RS:0;jenkins-hbase4:38381] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 08:59:16,885 DEBUG [RS:0;jenkins-hbase4:38381] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 08:59:16,886 DEBUG [RS:0;jenkins-hbase4:38381] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-03 08:59:16,886 DEBUG [RS:0;jenkins-hbase4:38381] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-03 08:59:16,886 DEBUG [RS:0;jenkins-hbase4:38381] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:16,886 DEBUG [RS:0;jenkins-hbase4:38381] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38381,1685782756606' 2023-06-03 08:59:16,886 DEBUG [RS:0;jenkins-hbase4:38381] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-03 08:59:16,886 DEBUG [RS:0;jenkins-hbase4:38381] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-03 08:59:16,887 DEBUG [RS:0;jenkins-hbase4:38381] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-03 08:59:16,887 INFO [RS:0;jenkins-hbase4:38381] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-03 08:59:16,887 INFO [RS:0;jenkins-hbase4:38381] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-03 08:59:16,932 DEBUG [jenkins-hbase4:38191] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-03 08:59:16,932 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38381,1685782756606, state=OPENING 2023-06-03 08:59:16,935 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-03 08:59:16,936 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:59:16,936 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38381,1685782756606}] 2023-06-03 08:59:16,936 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-03 08:59:16,989 INFO [RS:0;jenkins-hbase4:38381] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38381%2C1685782756606, suffix=, logDir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606, archiveDir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/oldWALs, maxLogs=32 2023-06-03 08:59:16,992 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-03 08:59:16,997 INFO [RS:0;jenkins-hbase4:38381] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782756990 2023-06-03 08:59:16,997 DEBUG [RS:0;jenkins-hbase4:38381] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37273,DS-5e001058-56e3-4ae7-b350-ab9337ecd6fc,DISK], DatanodeInfoWithStorage[127.0.0.1:39333,DS-e21efd32-e505-463e-87c1-8dcd0c0b9b0f,DISK]] 2023-06-03 08:59:17,090 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:17,090 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-03 08:59:17,092 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33662, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-03 08:59:17,096 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-03 08:59:17,096 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 08:59:17,098 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38381%2C1685782756606.meta, suffix=.meta, logDir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606, 
archiveDir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/oldWALs, maxLogs=32 2023-06-03 08:59:17,105 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606/jenkins-hbase4.apache.org%2C38381%2C1685782756606.meta.1685782757098.meta 2023-06-03 08:59:17,105 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37273,DS-5e001058-56e3-4ae7-b350-ab9337ecd6fc,DISK], DatanodeInfoWithStorage[127.0.0.1:39333,DS-e21efd32-e505-463e-87c1-8dcd0c0b9b0f,DISK]] 2023-06-03 08:59:17,105 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:59:17,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-03 08:59:17,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-03 08:59:17,106 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-03 08:59:17,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-03 08:59:17,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:59:17,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-03 08:59:17,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-03 08:59:17,107 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-03 08:59:17,108 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/info 2023-06-03 08:59:17,108 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/info 2023-06-03 08:59:17,109 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-03 08:59:17,109 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:59:17,109 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-03 08:59:17,110 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/rep_barrier 2023-06-03 08:59:17,110 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/rep_barrier 2023-06-03 08:59:17,110 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-03 08:59:17,111 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:59:17,111 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-03 08:59:17,112 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/table 2023-06-03 08:59:17,112 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/table 2023-06-03 08:59:17,112 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, 
incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-03 08:59:17,113 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:59:17,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740 2023-06-03 08:59:17,114 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740 2023-06-03 08:59:17,116 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-03 08:59:17,117 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-03 08:59:17,118 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=783267, jitterRate=-0.004024773836135864}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-03 08:59:17,118 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-03 08:59:17,120 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685782757090 2023-06-03 08:59:17,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-03 08:59:17,124 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-03 08:59:17,125 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38381,1685782756606, state=OPEN 2023-06-03 08:59:17,127 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-03 08:59:17,127 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-03 08:59:17,129 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-03 08:59:17,129 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38381,1685782756606 in 191 msec 2023-06-03 08:59:17,131 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=2, resume processing ppid=1 2023-06-03 08:59:17,131 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 351 msec 2023-06-03 08:59:17,133 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 396 msec 2023-06-03 08:59:17,133 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685782757133, completionTime=-1 2023-06-03 08:59:17,133 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-03 08:59:17,133 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-03 08:59:17,136 DEBUG [hconnection-0x6107912a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-03 08:59:17,139 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33664, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-03 08:59:17,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-03 08:59:17,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685782817141 2023-06-03 08:59:17,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685782877141 2023-06-03 08:59:17,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-06-03 08:59:17,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38191,1685782756566-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 08:59:17,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38191,1685782756566-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 08:59:17,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38191,1685782756566-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 08:59:17,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:38191, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 08:59:17,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-03 08:59:17,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-03 08:59:17,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-03 08:59:17,149 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-03 08:59:17,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-03 08:59:17,151 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-03 08:59:17,151 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-03 08:59:17,154 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/.tmp/data/hbase/namespace/0ceb2680ab57bf9060ac6ed353634830 2023-06-03 08:59:17,154 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/.tmp/data/hbase/namespace/0ceb2680ab57bf9060ac6ed353634830 empty. 2023-06-03 08:59:17,155 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/.tmp/data/hbase/namespace/0ceb2680ab57bf9060ac6ed353634830 2023-06-03 08:59:17,155 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-03 08:59:17,168 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-03 08:59:17,169 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0ceb2680ab57bf9060ac6ed353634830, NAME => 'hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/.tmp 2023-06-03 08:59:17,182 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:59:17,182 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 0ceb2680ab57bf9060ac6ed353634830, disabling compactions & flushes 2023-06-03 08:59:17,182 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 
2023-06-03 08:59:17,182 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 2023-06-03 08:59:17,183 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. after waiting 0 ms 2023-06-03 08:59:17,183 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 2023-06-03 08:59:17,183 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 2023-06-03 08:59:17,183 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 0ceb2680ab57bf9060ac6ed353634830: 2023-06-03 08:59:17,185 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-03 08:59:17,186 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782757186"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782757186"}]},"ts":"1685782757186"} 2023-06-03 08:59:17,189 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-03 08:59:17,190 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-03 08:59:17,190 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782757190"}]},"ts":"1685782757190"} 2023-06-03 08:59:17,192 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-03 08:59:17,201 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0ceb2680ab57bf9060ac6ed353634830, ASSIGN}] 2023-06-03 08:59:17,203 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0ceb2680ab57bf9060ac6ed353634830, ASSIGN 2023-06-03 08:59:17,204 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=0ceb2680ab57bf9060ac6ed353634830, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38381,1685782756606; forceNewPlan=false, retain=false 2023-06-03 08:59:17,355 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0ceb2680ab57bf9060ac6ed353634830, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:17,355 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782757355"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782757355"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782757355"}]},"ts":"1685782757355"} 2023-06-03 08:59:17,357 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 0ceb2680ab57bf9060ac6ed353634830, server=jenkins-hbase4.apache.org,38381,1685782756606}] 2023-06-03 08:59:17,514 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 2023-06-03 08:59:17,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0ceb2680ab57bf9060ac6ed353634830, NAME => 'hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830.', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:59:17,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 0ceb2680ab57bf9060ac6ed353634830 2023-06-03 08:59:17,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:59:17,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0ceb2680ab57bf9060ac6ed353634830 2023-06-03 08:59:17,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0ceb2680ab57bf9060ac6ed353634830 2023-06-03 08:59:17,516 INFO [StoreOpener-0ceb2680ab57bf9060ac6ed353634830-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0ceb2680ab57bf9060ac6ed353634830 2023-06-03 08:59:17,517 DEBUG [StoreOpener-0ceb2680ab57bf9060ac6ed353634830-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/namespace/0ceb2680ab57bf9060ac6ed353634830/info 2023-06-03 08:59:17,517 DEBUG [StoreOpener-0ceb2680ab57bf9060ac6ed353634830-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/namespace/0ceb2680ab57bf9060ac6ed353634830/info 2023-06-03 08:59:17,517 INFO [StoreOpener-0ceb2680ab57bf9060ac6ed353634830-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0ceb2680ab57bf9060ac6ed353634830 columnFamilyName info 2023-06-03 08:59:17,518 INFO [StoreOpener-0ceb2680ab57bf9060ac6ed353634830-1] regionserver.HStore(310): Store=0ceb2680ab57bf9060ac6ed353634830/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:59:17,518 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/namespace/0ceb2680ab57bf9060ac6ed353634830 2023-06-03 08:59:17,519 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/namespace/0ceb2680ab57bf9060ac6ed353634830 2023-06-03 08:59:17,523 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0ceb2680ab57bf9060ac6ed353634830 2023-06-03 08:59:17,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/namespace/0ceb2680ab57bf9060ac6ed353634830/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:59:17,526 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0ceb2680ab57bf9060ac6ed353634830; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=712680, jitterRate=-0.09378106892108917}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 08:59:17,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0ceb2680ab57bf9060ac6ed353634830: 2023-06-03 08:59:17,528 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830., pid=6, masterSystemTime=1685782757510 2023-06-03 08:59:17,530 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 2023-06-03 08:59:17,530 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 
2023-06-03 08:59:17,531 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0ceb2680ab57bf9060ac6ed353634830, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:17,531 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782757531"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685782757531"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685782757531"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782757531"}]},"ts":"1685782757531"} 2023-06-03 08:59:17,536 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-03 08:59:17,536 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 0ceb2680ab57bf9060ac6ed353634830, server=jenkins-hbase4.apache.org,38381,1685782756606 in 176 msec 2023-06-03 08:59:17,538 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-03 08:59:17,538 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=0ceb2680ab57bf9060ac6ed353634830, ASSIGN in 335 msec 2023-06-03 08:59:17,539 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-03 08:59:17,539 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782757539"}]},"ts":"1685782757539"} 2023-06-03 08:59:17,541 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-03 08:59:17,544 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-03 08:59:17,546 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 397 msec 2023-06-03 08:59:17,550 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-03 08:59:17,551 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-03 08:59:17,551 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:59:17,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-03 08:59:17,562 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): 
master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-03 08:59:17,565 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-06-03 08:59:17,576 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-03 08:59:17,585 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-03 08:59:17,588 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-06-03 08:59:17,600 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-03 08:59:17,603 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-03 08:59:17,603 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.975sec 2023-06-03 08:59:17,603 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-03 08:59:17,603 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-03 08:59:17,603 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-03 08:59:17,603 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38191,1685782756566-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-03 08:59:17,603 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38191,1685782756566-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
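The pid=7 and pid=8 entries above show the active master creating the built-in 'default' and 'hbase' namespaces through CreateNamespaceProcedure during initialization. For reference, a client can drive the same procedure type through the Admin API; the sketch below is illustrative only (the connection setup and the 'example_ns' name are assumptions, not taken from this log).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Submits a CreateNamespaceProcedure on the master, the same procedure type
          // as pid=7 and pid=8 in the log above.
          admin.createNamespace(NamespaceDescriptor.create("example_ns").build());
        }
      }
    }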
2023-06-03 08:59:17,605 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-03 08:59:17,622 DEBUG [Listener at localhost/36119] zookeeper.ReadOnlyZKClient(139): Connect 0x59561662 to 127.0.0.1:54897 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 08:59:17,626 DEBUG [Listener at localhost/36119] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@86a843, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 08:59:17,627 DEBUG [hconnection-0x40023202-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-03 08:59:17,629 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33676, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-03 08:59:17,631 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,38191,1685782756566 2023-06-03 08:59:17,631 INFO [Listener at localhost/36119] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 08:59:17,635 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-03 08:59:17,635 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 08:59:17,636 INFO [Listener at localhost/36119] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-03 08:59:17,637 DEBUG [Listener at localhost/36119] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-03 08:59:17,640 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53574, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-03 08:59:17,641 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-03 08:59:17,641 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
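The two TableDescriptorChecker warnings above fire because this test run uses deliberately tiny region limits (786432 bytes for the max file size, 8192 bytes for the memstore flush size) so that flushes and log rolls happen quickly. The log does not show where those values are set, so the following is only a sketch of the usual way to set them globally in the configuration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class SmallRegionLimits {
      // Returns a conf with the two limits the checker complains about, using the exact
      // values from the warnings (786432 bytes max file size, 8192 bytes flush size).
      static Configuration smallLimitsConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hregion.max.filesize", 786432L);        // MAX_FILESIZE warning
        conf.setLong("hbase.hregion.memstore.flush.size", 8192L);   // MEMSTORE_FLUSHSIZE warning
        return conf;
      }
    }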
2023-06-03 08:59:17,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-03 08:59:17,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:17,646 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-03 08:59:17,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-06-03 08:59:17,647 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-03 08:59:17,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-03 08:59:17,649 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c 2023-06-03 08:59:17,649 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c empty. 
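The "Client=jenkins ... create" request and procId=9 above correspond to a client-side Admin.createTable call with the logged descriptor (a single 'info' family, VERSIONS=1, BLOCKSIZE=65536). A minimal sketch of such a call follows; how the test actually obtains its Admin instance is an assumption, not shown in the log.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTestTable {
      static void createTestTable(Admin admin) throws IOException {
        TableName name = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
        admin.createTable(TableDescriptorBuilder.newBuilder(name)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setMaxVersions(1)       // VERSIONS => '1' in the logged descriptor
                .setBlocksize(65536)     // BLOCKSIZE => '65536'
                .build())
            .build());
        // createTable blocks until CreateTableProcedure (pid=9) finishes, which surfaces
        // later in the log as "Operation: CREATE ... procId: 9 completed".
      }
    }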
2023-06-03 08:59:17,650 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c 2023-06-03 08:59:17,650 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-06-03 08:59:17,660 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-06-03 08:59:17,661 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4ea52727b7bcc5aea73a03bcc34c035c, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/.tmp 2023-06-03 08:59:17,669 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:59:17,669 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing 4ea52727b7bcc5aea73a03bcc34c035c, disabling compactions & flushes 2023-06-03 08:59:17,669 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 08:59:17,669 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 08:59:17,669 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. after waiting 0 ms 2023-06-03 08:59:17,669 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 08:59:17,669 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 
2023-06-03 08:59:17,669 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for 4ea52727b7bcc5aea73a03bcc34c035c: 2023-06-03 08:59:17,671 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-03 08:59:17,672 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685782757672"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782757672"}]},"ts":"1685782757672"} 2023-06-03 08:59:17,674 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-03 08:59:17,675 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-03 08:59:17,675 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782757675"}]},"ts":"1685782757675"} 2023-06-03 08:59:17,676 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-06-03 08:59:17,680 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=4ea52727b7bcc5aea73a03bcc34c035c, ASSIGN}] 2023-06-03 08:59:17,682 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=4ea52727b7bcc5aea73a03bcc34c035c, ASSIGN 2023-06-03 08:59:17,682 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=4ea52727b7bcc5aea73a03bcc34c035c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38381,1685782756606; forceNewPlan=false, retain=false 2023-06-03 08:59:17,834 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=4ea52727b7bcc5aea73a03bcc34c035c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:17,834 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685782757833"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782757833"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782757833"}]},"ts":"1685782757833"} 2023-06-03 08:59:17,836 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; 
OpenRegionProcedure 4ea52727b7bcc5aea73a03bcc34c035c, server=jenkins-hbase4.apache.org,38381,1685782756606}] 2023-06-03 08:59:17,992 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 08:59:17,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4ea52727b7bcc5aea73a03bcc34c035c, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c.', STARTKEY => '', ENDKEY => ''} 2023-06-03 08:59:17,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling 4ea52727b7bcc5aea73a03bcc34c035c 2023-06-03 08:59:17,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 08:59:17,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4ea52727b7bcc5aea73a03bcc34c035c 2023-06-03 08:59:17,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4ea52727b7bcc5aea73a03bcc34c035c 2023-06-03 08:59:17,994 INFO [StoreOpener-4ea52727b7bcc5aea73a03bcc34c035c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4ea52727b7bcc5aea73a03bcc34c035c 2023-06-03 08:59:17,995 DEBUG [StoreOpener-4ea52727b7bcc5aea73a03bcc34c035c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info 2023-06-03 08:59:17,995 DEBUG [StoreOpener-4ea52727b7bcc5aea73a03bcc34c035c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info 2023-06-03 08:59:17,996 INFO [StoreOpener-4ea52727b7bcc5aea73a03bcc34c035c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4ea52727b7bcc5aea73a03bcc34c035c columnFamilyName info 2023-06-03 08:59:17,996 INFO [StoreOpener-4ea52727b7bcc5aea73a03bcc34c035c-1] regionserver.HStore(310): Store=4ea52727b7bcc5aea73a03bcc34c035c/info, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 08:59:17,997 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c 2023-06-03 08:59:17,997 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c 2023-06-03 08:59:18,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4ea52727b7bcc5aea73a03bcc34c035c 2023-06-03 08:59:18,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 08:59:18,003 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4ea52727b7bcc5aea73a03bcc34c035c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=873546, jitterRate=0.11077232658863068}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 08:59:18,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4ea52727b7bcc5aea73a03bcc34c035c: 2023-06-03 08:59:18,004 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c., pid=11, masterSystemTime=1685782757988 2023-06-03 08:59:18,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 08:59:18,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 
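The CompactionConfiguration line above lists the effective compaction settings for the new store (minCompactSize 128 MB, 3 to 10 files per compaction, ratio 1.2, off-peak ratio 5.0). As a sketch, these numbers usually come from the configuration keys below; the values simply restate what the log reports and are not settings this test changes.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionKnobs {
      static Configuration defaultsSeenInLog() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);                         // minFilesToCompact:3
        conf.setInt("hbase.hstore.compaction.max", 10);                        // maxFilesToCompact:10
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                  // ratio 1.200000
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);          // off-peak ratio 5.000000
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);  // minCompactSize:128 MB
        return conf;
      }
    }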
2023-06-03 08:59:18,006 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=4ea52727b7bcc5aea73a03bcc34c035c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:18,007 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685782758006"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685782758006"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685782758006"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782758006"}]},"ts":"1685782758006"} 2023-06-03 08:59:18,011 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-03 08:59:18,011 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 4ea52727b7bcc5aea73a03bcc34c035c, server=jenkins-hbase4.apache.org,38381,1685782756606 in 172 msec 2023-06-03 08:59:18,013 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-03 08:59:18,013 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=4ea52727b7bcc5aea73a03bcc34c035c, ASSIGN in 331 msec 2023-06-03 08:59:18,014 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-03 08:59:18,014 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782758014"}]},"ts":"1685782758014"} 2023-06-03 08:59:18,016 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-06-03 08:59:18,019 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-03 08:59:18,021 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 376 msec 2023-06-03 08:59:22,662 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-03 08:59:22,853 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-03 08:59:27,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-03 08:59:27,649 INFO [Listener at localhost/36119] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-06-03 08:59:27,651 DEBUG [Listener at localhost/36119] 
hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:27,651 DEBUG [Listener at localhost/36119] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 08:59:27,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-06-03 08:59:27,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-06-03 08:59:27,670 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-06-03 08:59:27,671 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-03 08:59:27,671 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-06-03 08:59:27,671 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 2023-06-03 08:59:27,672 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-06-03 08:59:27,672 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-03 08:59:27,673 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-03 08:59:27,673 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,673 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-03 08:59:27,674 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 08:59:27,674 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,674 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-03 08:59:27,674 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-06-03 08:59:27,674 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-06-03 08:59:27,675 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-03 08:59:27,675 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-03 08:59:27,675 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-06-03 08:59:27,677 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-06-03 08:59:27,677 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-06-03 08:59:27,677 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-03 08:59:27,678 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-06-03 08:59:27,678 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-03 08:59:27,678 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-03 08:59:27,678 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 2023-06-03 08:59:27,679 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. started... 
2023-06-03 08:59:27,679 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 0ceb2680ab57bf9060ac6ed353634830 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-03 08:59:27,689 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/namespace/0ceb2680ab57bf9060ac6ed353634830/.tmp/info/1a4901840d474c899dbcf102c4e6c3e6 2023-06-03 08:59:27,698 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/namespace/0ceb2680ab57bf9060ac6ed353634830/.tmp/info/1a4901840d474c899dbcf102c4e6c3e6 as hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/namespace/0ceb2680ab57bf9060ac6ed353634830/info/1a4901840d474c899dbcf102c4e6c3e6 2023-06-03 08:59:27,703 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/namespace/0ceb2680ab57bf9060ac6ed353634830/info/1a4901840d474c899dbcf102c4e6c3e6, entries=2, sequenceid=6, filesize=4.8 K 2023-06-03 08:59:27,704 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 0ceb2680ab57bf9060ac6ed353634830 in 25ms, sequenceid=6, compaction requested=false 2023-06-03 08:59:27,705 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 0ceb2680ab57bf9060ac6ed353634830: 2023-06-03 08:59:27,705 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 2023-06-03 08:59:27,705 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-03 08:59:27,705 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
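The flush-table-proc barrier above (acquire, reached, release) is the server-side coordination behind a table flush request, and its result here is the 78 B memstore of region 0ceb2680... being written out as HFile 1a4901840d47... A client-side sketch of the call that starts such a flush follows; whether the test issues the flush exactly this way is an assumption, the log only shows the resulting procedure.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class FlushNamespaceTable {
      static void flush(Admin admin) throws IOException {
        // Flushes every region of hbase:namespace; in this log that is the single region
        // 0ceb2680..., whose 78 B memstore was written out as a ~4.8 K HFile above.
        admin.flush(TableName.valueOf("hbase:namespace"));
      }
    }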
2023-06-03 08:59:27,705 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,705 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-06-03 08:59:27,705 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,38381,1685782756606' joining acquired barrier for procedure (hbase:namespace) in zk 2023-06-03 08:59:27,707 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-03 08:59:27,707 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,707 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,707 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 08:59:27,707 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 08:59:27,707 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace 2023-06-03 08:59:27,707 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-03 08:59:27,707 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 08:59:27,708 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 08:59:27,708 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-03 08:59:27,708 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,708 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 08:59:27,709 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,38381,1685782756606' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-06-03 08:59:27,709 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-06-03 08:59:27,709 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@68544670[Count = 0] remaining members to acquire global barrier 2023-06-03 08:59:27,709 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-03 08:59:27,711 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-03 08:59:27,711 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-03 08:59:27,711 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-03 08:59:27,711 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 2023-06-03 08:59:27,711 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-06-03 08:59:27,711 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase4.apache.org,38381,1685782756606' in zk 2023-06-03 08:59:27,711 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,711 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-03 08:59:27,713 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-06-03 08:59:27,713 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,713 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-03 08:59:27,713 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,713 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 08:59:27,713 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 08:59:27,713 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 2023-06-03 08:59:27,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 08:59:27,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 08:59:27,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-03 08:59:27,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 08:59:27,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-03 08:59:27,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,716 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase4.apache.org,38381,1685782756606': 2023-06-03 08:59:27,716 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,38381,1685782756606' released barrier for procedure'hbase:namespace', counting down latch. Waiting for 0 more 2023-06-03 08:59:27,716 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-06-03 08:59:27,716 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
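The "Current zk system" dumps above print the /hbase/flush-table-proc znode tree (abort, acquired, reached) that coordinator and members use as barriers. Purely as an illustration, the same tree could be inspected with a plain ZooKeeper client; the quorum string below is taken from the log, everything else is assumed.

    import org.apache.zookeeper.ZooKeeper;

    public class DumpFlushProcZnodes {
      public static void main(String[] args) throws Exception {
        // Quorum string from the log lines above; the session timeout is arbitrary.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:54897", 30000, event -> { });
        for (String child : zk.getChildren("/hbase/flush-table-proc", false)) {
          System.out.println("/hbase/flush-table-proc/" + child);            // abort, acquired, reached
          for (String grandChild : zk.getChildren("/hbase/flush-table-proc/" + child, false)) {
            System.out.println("    " + grandChild);                         // e.g. hbase:namespace
          }
        }
        zk.close();
      }
    }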
2023-06-03 08:59:27,716 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-03 08:59:27,716 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-06-03 08:59:27,716 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-03 08:59:27,718 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-03 08:59:27,718 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-03 08:59:27,718 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-06-03 08:59:27,719 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-06-03 08:59:27,719 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 08:59:27,719 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-06-03 08:59:27,719 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-03 08:59:27,719 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 08:59:27,719 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-03 08:59:27,719 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,719 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 08:59:27,719 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 08:59:27,720 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-06-03 08:59:27,720 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-03 08:59:27,720 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 08:59:27,720 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----hbase:namespace 2023-06-03 08:59:27,720 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,721 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,721 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 08:59:27,721 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-03 08:59:27,722 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,727 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,727 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-03 08:59:27,727 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-03 08:59:27,727 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-03 08:59:27,727 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 08:59:27,727 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-03 08:59:27,727 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-03 08:59:27,727 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-03 08:59:27,727 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:27,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-06-03 08:59:27,728 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-03 08:59:27,728 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-03 08:59:27,728 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-03 08:59:27,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-03 08:59:27,729 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-03 08:59:27,728 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 08:59:27,731 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry) 2023-06-03 08:59:27,731 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-03 08:59:37,731 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2704): Getting current status of procedure from master... 
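[editor's note] The entries above trace one complete "flush-table-proc" cycle: the master-side coordinator creates the acquired/reached barrier znodes, the single region server member flushes its region and checks in, the coordinator clears the znodes, and the client-side HBaseAdmin then polls the master for completion. A minimal sketch (not part of this test; class name and table choice are illustrative, and it assumes a running cluster reachable from the default configuration) of driving the same globally-barriered flush from client code:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushTableProcClient {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // "flush-table-proc" is the procedure signature logged by MasterFlushTableProcedureManager;
      // the instance name is the table to flush (here hbase:namespace, as in the log above).
      // execProcedure submits the request and, like the HBaseAdmin lines above, waits for the
      // master to report the procedure as done before returning.
      admin.execProcedure("flush-table-proc", "hbase:namespace", Collections.emptyMap());
    }
  }
}
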
2023-06-03 08:59:37,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-03 08:59:37,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-06-03 08:59:37,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,748 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-03 08:59:37,748 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-03 08:59:37,749 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-03 08:59:37,749 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-06-03 08:59:37,749 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,749 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,751 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,751 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-03 08:59:37,751 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-03 08:59:37,751 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 08:59:37,751 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,751 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-03 08:59:37,751 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,752 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,752 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-03 08:59:37,752 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,752 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,752 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,752 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-03 08:59:37,752 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-03 08:59:37,753 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-03 08:59:37,753 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-03 08:59:37,753 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-03 08:59:37,753 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 08:59:37,753 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. started... 
2023-06-03 08:59:37,754 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 4ea52727b7bcc5aea73a03bcc34c035c 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-03 08:59:37,768 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/.tmp/info/74cd5687828a451cbb280cdaa433ebd4 2023-06-03 08:59:37,776 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/.tmp/info/74cd5687828a451cbb280cdaa433ebd4 as hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/74cd5687828a451cbb280cdaa433ebd4 2023-06-03 08:59:37,781 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/74cd5687828a451cbb280cdaa433ebd4, entries=1, sequenceid=5, filesize=5.8 K 2023-06-03 08:59:37,782 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 4ea52727b7bcc5aea73a03bcc34c035c in 28ms, sequenceid=5, compaction requested=false 2023-06-03 08:59:37,783 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 4ea52727b7bcc5aea73a03bcc34c035c: 2023-06-03 08:59:37,783 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 08:59:37,783 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-03 08:59:37,783 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-06-03 08:59:37,783 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,783 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-03 08:59:37,783 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,38381,1685782756606' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-03 08:59:37,785 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,785 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 08:59:37,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 08:59:37,785 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,785 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-03 08:59:37,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 08:59:37,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 08:59:37,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,787 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 08:59:37,787 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,38381,1685782756606' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-03 08:59:37,787 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@6c6bdfa2[Count = 0] remaining members to acquire 
global barrier 2023-06-03 08:59:37,787 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-03 08:59:37,787 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,789 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,789 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,789 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,789 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-06-03 08:59:37,789 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,789 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-03 08:59:37,789 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-03 08:59:37,789 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,38381,1685782756606' in zk 2023-06-03 08:59:37,792 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,792 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-03 08:59:37,792 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,792 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 
08:59:37,792 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 08:59:37,792 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-03 08:59:37,792 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-03 08:59:37,793 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 08:59:37,793 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 08:59:37,793 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,793 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,794 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 08:59:37,794 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,794 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,795 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,38381,1685782756606': 2023-06-03 08:59:37,795 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,38381,1685782756606' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-03 08:59:37,795 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-03 08:59:37,795 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-06-03 08:59:37,795 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-03 08:59:37,795 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,795 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-03 08:59:37,799 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,799 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,799 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,799 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,799 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 08:59:37,800 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 08:59:37,799 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-03 08:59:37,800 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,800 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 08:59:37,800 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,800 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-03 08:59:37,800 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 08:59:37,800 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,800 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,801 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 08:59:37,801 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,801 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,801 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,802 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 08:59:37,802 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,802 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,804 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,804 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,804 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-03 08:59:37,805 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-03 08:59:37,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-06-03 08:59:37,805 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:37,805 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-06-03 08:59:37,805 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-03 08:59:37,805 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 08:59:37,805 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-03 08:59:37,805 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-03 08:59:37,805 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,805 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-03 08:59:37,805 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,805 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-03 08:59:37,805 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:37,805 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 08:59:37,805 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-03 08:59:47,805 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-03 08:59:47,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-03 08:59:47,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-06-03 08:59:47,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-06-03 08:59:47,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,816 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-03 08:59:47,816 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-03 08:59:47,816 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-03 08:59:47,817 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-06-03 08:59:47,817 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,817 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,818 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-03 08:59:47,818 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,818 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-03 08:59:47,819 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 08:59:47,819 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,819 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-03 08:59:47,819 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,819 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,819 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-03 08:59:47,820 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,820 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,820 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-03 08:59:47,820 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,820 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-03 08:59:47,820 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-03 08:59:47,820 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-03 08:59:47,820 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-03 08:59:47,821 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-03 08:59:47,821 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 08:59:47,821 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. started... 
2023-06-03 08:59:47,821 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 4ea52727b7bcc5aea73a03bcc34c035c 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-03 08:59:47,831 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/.tmp/info/9962a2709acc46dfbf3faec0fbf31366 2023-06-03 08:59:47,837 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/.tmp/info/9962a2709acc46dfbf3faec0fbf31366 as hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/9962a2709acc46dfbf3faec0fbf31366 2023-06-03 08:59:47,844 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/9962a2709acc46dfbf3faec0fbf31366, entries=1, sequenceid=9, filesize=5.8 K 2023-06-03 08:59:47,845 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 4ea52727b7bcc5aea73a03bcc34c035c in 24ms, sequenceid=9, compaction requested=false 2023-06-03 08:59:47,845 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 4ea52727b7bcc5aea73a03bcc34c035c: 2023-06-03 08:59:47,845 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 08:59:47,845 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-03 08:59:47,845 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-06-03 08:59:47,845 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,845 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-03 08:59:47,845 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,38381,1685782756606' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-03 08:59:47,847 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,847 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 08:59:47,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 08:59:47,847 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,847 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-03 08:59:47,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 08:59:47,848 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 08:59:47,848 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,848 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,848 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 08:59:47,849 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,38381,1685782756606' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-03 08:59:47,849 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@29694c4b[Count = 0] remaining members to acquire 
global barrier 2023-06-03 08:59:47,849 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-03 08:59:47,849 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,850 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,850 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,850 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,850 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-06-03 08:59:47,850 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-03 08:59:47,850 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,38381,1685782756606' in zk 2023-06-03 08:59:47,850 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,850 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-03 08:59:47,853 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-03 08:59:47,853 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,853 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-03 08:59:47,853 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,853 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 08:59:47,853 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 08:59:47,853 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-03 08:59:47,853 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 08:59:47,854 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 08:59:47,854 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,854 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,854 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 08:59:47,854 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,855 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,855 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,38381,1685782756606': 2023-06-03 08:59:47,855 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,38381,1685782756606' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-03 08:59:47,855 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-03 08:59:47,855 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-06-03 08:59:47,855 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-03 08:59:47,855 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,855 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-03 08:59:47,857 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,857 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,857 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,857 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,857 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 08:59:47,857 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 08:59:47,857 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,857 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-03 08:59:47,857 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 08:59:47,857 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-03 08:59:47,857 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,857 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 08:59:47,858 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,858 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,858 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 08:59:47,858 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,859 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,859 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,859 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 08:59:47,859 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,859 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,862 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,862 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-03 08:59:47,862 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,862 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-03 08:59:47,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-03 08:59:47,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
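[editor's note] At this point MasterFlushTableProcedureManager reports the flush-table procedure as successful, and the client side (HBaseAdmin, next block) keeps polling the master until the procedure is done. A hedged sketch of driving the same round trip from application code is below; Admin#execProcedure and Admin#isProcedureDone are real Admin methods and "flush-table-proc" is the signature seen in the log, but the connection setup is an assumption and this is not necessarily how the test itself invokes the flush.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushTableProcExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      String signature = "flush-table-proc"; // procedure signature from the log
      String instance = "TestLogRolling-testCompactionRecordDoesntBlockRolling"; // table name from the log
      // Submit the distributed procedure; HBaseAdmin itself already waits and polls
      // (the "Waiting a max of 300000 ms" / "Getting current status" entries in the log).
      admin.execProcedure(signature, instance, Collections.emptyMap());
      // The explicit poll below just mirrors the same isProcedureDone check.
      while (!admin.isProcedureDone(signature, instance, Collections.emptyMap())) {
        Thread.sleep(1000);
      }
    }
  }
}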
2023-06-03 08:59:47,862 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-03 08:59:47,862 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 08:59:47,862 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-03 08:59:47,862 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,863 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:47,863 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-03 08:59:47,863 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-03 08:59:47,863 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,863 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:47,863 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-03 08:59:47,863 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 08:59:47,863 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,863 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-03 08:59:57,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-03 08:59:57,883 INFO [Listener at localhost/36119] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782756990 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782797866 2023-06-03 08:59:57,883 DEBUG [Listener at localhost/36119] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37273,DS-5e001058-56e3-4ae7-b350-ab9337ecd6fc,DISK], DatanodeInfoWithStorage[127.0.0.1:39333,DS-e21efd32-e505-463e-87c1-8dcd0c0b9b0f,DISK]] 2023-06-03 08:59:57,883 DEBUG [Listener at localhost/36119] wal.AbstractFSWAL(716): hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782756990 is not closed yet, will try archiving it next time 2023-06-03 08:59:57,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-06-03 08:59:57,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-06-03 08:59:57,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,894 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-03 08:59:57,894 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-03 08:59:57,895 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-03 08:59:57,895 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
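[editor's note] The "Rolled WAL ... with entries=13 ... new WAL ..." entry above is the log roll requested between flushes: the old writer is closed, a new FSHLog writer is created on the same two-datanode pipeline, and the old file is archived later. A minimal, hedged sketch of requesting such a roll through the public client API is below; Admin#rollWALWriter is a real Admin method and the server name string is the one that appears in the log, while the surrounding setup is assumed rather than taken from the test.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollWalExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // "host,port,startcode" format copied from the log above.
      ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org,38381,1685782756606");
      // Ask that region server to close its current WAL and start a new one
      // ("Rolled WAL ... new WAL ..." in the server log).
      admin.rollWALWriter(rs);
    }
  }
}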
2023-06-03 08:59:57,895 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,895 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,897 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,897 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-03 08:59:57,897 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-03 08:59:57,897 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 08:59:57,897 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,897 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-03 08:59:57,898 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,898 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,898 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-03 08:59:57,898 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,898 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,898 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-03 08:59:57,899 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,899 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-03 08:59:57,899 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-03 08:59:57,899 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-03 08:59:57,899 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-03 08:59:57,899 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-03 08:59:57,899 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 08:59:57,899 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. started... 2023-06-03 08:59:57,899 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 4ea52727b7bcc5aea73a03bcc34c035c 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-03 08:59:57,913 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/.tmp/info/66dd0ed1bb684b3f935cb07d805cc031 2023-06-03 08:59:57,919 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/.tmp/info/66dd0ed1bb684b3f935cb07d805cc031 as hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/66dd0ed1bb684b3f935cb07d805cc031 2023-06-03 08:59:57,925 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/66dd0ed1bb684b3f935cb07d805cc031, entries=1, sequenceid=13, filesize=5.8 K 2023-06-03 08:59:57,926 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 4ea52727b7bcc5aea73a03bcc34c035c in 27ms, sequenceid=13, compaction requested=true 2023-06-03 08:59:57,926 DEBUG 
[rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 4ea52727b7bcc5aea73a03bcc34c035c: 2023-06-03 08:59:57,926 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 08:59:57,926 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-03 08:59:57,926 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-03 08:59:57,926 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,926 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-03 08:59:57,926 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,38381,1685782756606' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-03 08:59:57,928 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,928 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,928 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,928 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 08:59:57,928 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 08:59:57,928 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,928 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-03 08:59:57,929 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 08:59:57,929 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 08:59:57,929 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,929 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,930 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 08:59:57,930 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,38381,1685782756606' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-03 08:59:57,930 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@52a74890[Count = 0] remaining members to acquire global barrier 2023-06-03 08:59:57,930 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-03 08:59:57,930 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,931 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,931 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,931 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,931 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-06-03 08:59:57,931 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-03 08:59:57,931 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,931 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-03 08:59:57,931 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,38381,1685782756606' in zk 2023-06-03 08:59:57,937 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-03 08:59:57,937 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,937 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-03 08:59:57,937 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,937 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 08:59:57,937 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-06-03 08:59:57,937 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 08:59:57,938 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 08:59:57,938 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 08:59:57,938 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,939 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,939 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 08:59:57,939 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,939 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,940 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,38381,1685782756606': 2023-06-03 08:59:57,940 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,38381,1685782756606' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-03 08:59:57,940 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-03 08:59:57,940 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-06-03 08:59:57,940 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-03 08:59:57,940 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,940 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-03 08:59:57,941 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,941 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,941 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,941 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,942 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 08:59:57,942 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 08:59:57,941 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-03 08:59:57,942 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,942 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 08:59:57,942 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-03 08:59:57,942 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 08:59:57,942 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,942 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,942 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,942 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 08:59:57,943 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,943 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,943 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,944 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 08:59:57,944 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,944 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,947 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-03 08:59:57,947 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-03 08:59:57,947 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 08:59:57,947 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-03 08:59:57,947 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-03 08:59:57,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-03 08:59:57,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-06-03 08:59:57,948 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,948 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-03 08:59:57,948 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,948 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,948 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 08:59:57,948 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,948 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 08:59:57,948 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-03 08:59:57,948 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 08:59:57,948 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-03 08:59:57,948 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:07,948 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-03 09:00:07,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-03 09:00:07,950 DEBUG [Listener at localhost/36119] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-03 09:00:07,954 DEBUG [Listener at localhost/36119] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-03 09:00:07,954 DEBUG [Listener at localhost/36119] regionserver.HStore(1912): 4ea52727b7bcc5aea73a03bcc34c035c/info is initiating minor compaction (all files) 2023-06-03 09:00:07,955 INFO [Listener at localhost/36119] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-03 09:00:07,955 INFO [Listener at localhost/36119] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 09:00:07,955 INFO [Listener at localhost/36119] regionserver.HRegion(2259): Starting compaction of 4ea52727b7bcc5aea73a03bcc34c035c/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 09:00:07,955 INFO [Listener at localhost/36119] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/74cd5687828a451cbb280cdaa433ebd4, hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/9962a2709acc46dfbf3faec0fbf31366, hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/66dd0ed1bb684b3f935cb07d805cc031] into tmpdir=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/.tmp, totalSize=17.4 K 2023-06-03 09:00:07,955 DEBUG [Listener at localhost/36119] compactions.Compactor(207): Compacting 74cd5687828a451cbb280cdaa433ebd4, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1685782777741 2023-06-03 09:00:07,956 DEBUG [Listener at localhost/36119] compactions.Compactor(207): Compacting 9962a2709acc46dfbf3faec0fbf31366, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1685782787808 2023-06-03 09:00:07,956 DEBUG [Listener at localhost/36119] compactions.Compactor(207): Compacting 66dd0ed1bb684b3f935cb07d805cc031, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1685782797865 2023-06-03 09:00:07,968 INFO [Listener at localhost/36119] throttle.PressureAwareThroughputController(145): 4ea52727b7bcc5aea73a03bcc34c035c#info#compaction#19 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-03 09:00:07,985 DEBUG [Listener at localhost/36119] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/.tmp/info/2eb7cef0fd0a45548d18022b2e6a25d0 as hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/2eb7cef0fd0a45548d18022b2e6a25d0 2023-06-03 09:00:07,991 INFO [Listener at localhost/36119] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4ea52727b7bcc5aea73a03bcc34c035c/info of 4ea52727b7bcc5aea73a03bcc34c035c into 2eb7cef0fd0a45548d18022b2e6a25d0(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-03 09:00:07,991 DEBUG [Listener at localhost/36119] regionserver.HRegion(2289): Compaction status journal for 4ea52727b7bcc5aea73a03bcc34c035c: 2023-06-03 09:00:08,004 INFO [Listener at localhost/36119] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782797866 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782807993 2023-06-03 09:00:08,005 DEBUG [Listener at localhost/36119] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37273,DS-5e001058-56e3-4ae7-b350-ab9337ecd6fc,DISK], DatanodeInfoWithStorage[127.0.0.1:39333,DS-e21efd32-e505-463e-87c1-8dcd0c0b9b0f,DISK]] 2023-06-03 09:00:08,005 DEBUG [Listener at localhost/36119] wal.AbstractFSWAL(716): hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782797866 is not closed yet, will try archiving it next time 2023-06-03 09:00:08,005 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782756990 to hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/oldWALs/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782756990 2023-06-03 09:00:08,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-06-03 09:00:08,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
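[editor's note] Above, ExploringCompactionPolicy selects all three ~5.8 K store files (17769 bytes total) and the minor compaction rewrites them into a single ~8.0 K file, after which the fully-compacted WAL is archived to oldWALs. In the test the compaction is driven directly against the region/store, but the hedged sketch below shows the client-level equivalent: requesting a compaction for the table and polling its state. Admin#compact and Admin#getCompactionState are real API calls; the table name comes from the log, everything else is an assumption.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.CompactionState;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CompactTableExample {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Asynchronous request; the server picks the files via its compaction policy,
      // as the ExploringCompactionPolicy entries above show.
      admin.compact(table);
      // Poll until no compaction is reported for the table.
      while (admin.getCompactionState(table) != CompactionState.NONE) {
        Thread.sleep(500);
      }
    }
  }
}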
2023-06-03 09:00:08,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,012 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-03 09:00:08,012 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-03 09:00:08,013 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-03 09:00:08,013 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-06-03 09:00:08,013 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,013 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,018 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,018 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-03 09:00:08,018 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-03 09:00:08,019 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 09:00:08,019 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,019 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-03 09:00:08,019 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,019 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,020 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-03 09:00:08,020 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,020 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,022 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-03 09:00:08,022 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,023 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-03 09:00:08,023 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-03 09:00:08,023 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-03 09:00:08,026 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-03 09:00:08,026 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-03 09:00:08,026 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 09:00:08,027 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. started... 
2023-06-03 09:00:08,027 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 4ea52727b7bcc5aea73a03bcc34c035c 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-03 09:00:08,039 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/.tmp/info/9616676bb9764117aef51a9785c7b47c 2023-06-03 09:00:08,045 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/.tmp/info/9616676bb9764117aef51a9785c7b47c as hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/9616676bb9764117aef51a9785c7b47c 2023-06-03 09:00:08,050 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/9616676bb9764117aef51a9785c7b47c, entries=1, sequenceid=18, filesize=5.8 K 2023-06-03 09:00:08,051 INFO [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 4ea52727b7bcc5aea73a03bcc34c035c in 24ms, sequenceid=18, compaction requested=false 2023-06-03 09:00:08,052 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 4ea52727b7bcc5aea73a03bcc34c035c: 2023-06-03 09:00:08,052 DEBUG [rs(jenkins-hbase4.apache.org,38381,1685782756606)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 09:00:08,052 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-03 09:00:08,052 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
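[editor's note] Each procedure round above flushes one ~1.05 KB edit from the memstore of the 'info' family into a new ~5.8 K HFile (sequenceid 13, then 18), committed from .tmp into the store directory. The hedged sketch below shows the corresponding client pattern: write a small row, then flush the table so the memstore is persisted. The table name and the 'info' family appear in the log; the row key, qualifier and value are illustrative assumptions.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutAndFlushExample {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table t = conn.getTable(table);
         Admin admin = conn.getAdmin()) {
      Put put = new Put(Bytes.toBytes("row-1")); // row key is an assumption, not from the test
      put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      t.put(put);
      // Persist the memstore; this is what shows up as DefaultStoreFlusher /
      // "Finished flush of dataSize ..." entries in the log above.
      admin.flush(table);
    }
  }
}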
2023-06-03 09:00:08,052 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,052 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-03 09:00:08,052 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,38381,1685782756606' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-03 09:00:08,054 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,054 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,054 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,054 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 09:00:08,054 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 09:00:08,054 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,054 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-03 09:00:08,055 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 09:00:08,055 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 09:00:08,055 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,056 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,056 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 09:00:08,056 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,38381,1685782756606' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-03 09:00:08,056 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@188ae372[Count = 0] remaining members to acquire 
global barrier 2023-06-03 09:00:08,056 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-03 09:00:08,056 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,057 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,057 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,057 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,057 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-06-03 09:00:08,058 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-03 09:00:08,058 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,38381,1685782756606' in zk 2023-06-03 09:00:08,058 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,058 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-03 09:00:08,060 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,060 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-03 09:00:08,060 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,060 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 
09:00:08,060 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 09:00:08,060 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-03 09:00:08,060 DEBUG [member: 'jenkins-hbase4.apache.org,38381,1685782756606' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-03 09:00:08,061 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 09:00:08,061 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 09:00:08,061 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,062 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,062 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 09:00:08,062 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,062 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,063 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,38381,1685782756606': 2023-06-03 09:00:08,063 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,38381,1685782756606' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-03 09:00:08,063 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-03 09:00:08,063 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-06-03 09:00:08,063 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-03 09:00:08,063 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,063 INFO [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-03 09:00:08,066 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,066 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,066 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-03 09:00:08,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-03 09:00:08,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-03 09:00:08,066 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,066 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-03 09:00:08,066 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 09:00:08,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-03 09:00:08,067 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,067 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,067 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-03 09:00:08,069 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,069 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,069 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,069 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-03 09:00:08,070 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,071 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,073 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,073 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-03 09:00:08,073 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-03 09:00:08,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-03 09:00:08,073 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,073 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-03 09:00:08,073 DEBUG [(jenkins-hbase4.apache.org,38191,1685782756566)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-03 09:00:08,074 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-06-03 09:00:08,074 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-03 09:00:08,073 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-03 09:00:08,074 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-03 09:00:08,074 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 09:00:08,074 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:08,075 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,075 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,075 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-03 09:00:08,075 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-03 09:00:08,075 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 09:00:18,075 DEBUG [Listener at localhost/36119] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-03 09:00:18,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38191] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-03 09:00:18,086 INFO [Listener at localhost/36119] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782807993 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782818078 2023-06-03 09:00:18,086 DEBUG [Listener at localhost/36119] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39333,DS-e21efd32-e505-463e-87c1-8dcd0c0b9b0f,DISK], DatanodeInfoWithStorage[127.0.0.1:37273,DS-5e001058-56e3-4ae7-b350-ab9337ecd6fc,DISK]] 2023-06-03 09:00:18,086 DEBUG [Listener at localhost/36119] wal.AbstractFSWAL(716): hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782807993 is not closed yet, will try archiving it next time 2023-06-03 09:00:18,086 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-03 09:00:18,086 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782797866 to hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/oldWALs/jenkins-hbase4.apache.org%2C38381%2C1685782756606.1685782797866 2023-06-03 09:00:18,086 INFO [Listener at localhost/36119] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-03 09:00:18,086 DEBUG [Listener at localhost/36119] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x59561662 to 127.0.0.1:54897 2023-06-03 09:00:18,088 DEBUG [Listener at localhost/36119] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:00:18,088 DEBUG [Listener at localhost/36119] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-03 09:00:18,089 DEBUG [Listener at localhost/36119] util.JVMClusterUtil(257): Found active master hash=1496322615, stopped=false 2023-06-03 09:00:18,089 INFO [Listener at localhost/36119] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,38191,1685782756566 2023-06-03 09:00:18,091 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-03 09:00:18,091 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-03 09:00:18,091 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:00:18,091 INFO [Listener at localhost/36119] procedure2.ProcedureExecutor(629): Stopping 
2023-06-03 09:00:18,091 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 09:00:18,091 DEBUG [Listener at localhost/36119] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x49b03962 to 127.0.0.1:54897 2023-06-03 09:00:18,092 DEBUG [Listener at localhost/36119] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:00:18,092 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 09:00:18,092 INFO [Listener at localhost/36119] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,38381,1685782756606' ***** 2023-06-03 09:00:18,092 INFO [Listener at localhost/36119] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-03 09:00:18,092 INFO [RS:0;jenkins-hbase4:38381] regionserver.HeapMemoryManager(220): Stopping 2023-06-03 09:00:18,093 INFO [RS:0;jenkins-hbase4:38381] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-03 09:00:18,093 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-03 09:00:18,093 INFO [RS:0;jenkins-hbase4:38381] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-03 09:00:18,093 INFO [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(3303): Received CLOSE for 0ceb2680ab57bf9060ac6ed353634830 2023-06-03 09:00:18,093 INFO [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(3303): Received CLOSE for 4ea52727b7bcc5aea73a03bcc34c035c 2023-06-03 09:00:18,093 INFO [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:18,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0ceb2680ab57bf9060ac6ed353634830, disabling compactions & flushes 2023-06-03 09:00:18,093 DEBUG [RS:0;jenkins-hbase4:38381] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x59a7a632 to 127.0.0.1:54897 2023-06-03 09:00:18,093 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 2023-06-03 09:00:18,093 DEBUG [RS:0;jenkins-hbase4:38381] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:00:18,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 2023-06-03 09:00:18,094 INFO [RS:0;jenkins-hbase4:38381] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-03 09:00:18,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. after waiting 0 ms 2023-06-03 09:00:18,094 INFO [RS:0;jenkins-hbase4:38381] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-03 09:00:18,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 
2023-06-03 09:00:18,094 INFO [RS:0;jenkins-hbase4:38381] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-03 09:00:18,094 INFO [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-03 09:00:18,094 INFO [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-03 09:00:18,094 DEBUG [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(1478): Online Regions={0ceb2680ab57bf9060ac6ed353634830=hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830., 4ea52727b7bcc5aea73a03bcc34c035c=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c., 1588230740=hbase:meta,,1.1588230740} 2023-06-03 09:00:18,094 DEBUG [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(1504): Waiting on 0ceb2680ab57bf9060ac6ed353634830, 1588230740, 4ea52727b7bcc5aea73a03bcc34c035c 2023-06-03 09:00:18,095 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 09:00:18,095 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 09:00:18,095 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 09:00:18,096 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 09:00:18,096 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 09:00:18,096 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-06-03 09:00:18,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/namespace/0ceb2680ab57bf9060ac6ed353634830/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-03 09:00:18,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 2023-06-03 09:00:18,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0ceb2680ab57bf9060ac6ed353634830: 2023-06-03 09:00:18,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685782757148.0ceb2680ab57bf9060ac6ed353634830. 2023-06-03 09:00:18,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4ea52727b7bcc5aea73a03bcc34c035c, disabling compactions & flushes 2023-06-03 09:00:18,103 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 09:00:18,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 
2023-06-03 09:00:18,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. after waiting 0 ms 2023-06-03 09:00:18,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 09:00:18,103 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 4ea52727b7bcc5aea73a03bcc34c035c 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-03 09:00:18,113 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.84 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/.tmp/info/54f4fac2bd104e7395c87c885c68d9c8 2023-06-03 09:00:18,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/.tmp/info/c49a17c12f1546d3951cf01c80a53cc2 2023-06-03 09:00:18,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/.tmp/info/c49a17c12f1546d3951cf01c80a53cc2 as hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/c49a17c12f1546d3951cf01c80a53cc2 2023-06-03 09:00:18,135 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/c49a17c12f1546d3951cf01c80a53cc2, entries=1, sequenceid=22, filesize=5.8 K 2023-06-03 09:00:18,136 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 4ea52727b7bcc5aea73a03bcc34c035c in 33ms, sequenceid=22, compaction requested=true 2023-06-03 09:00:18,145 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/74cd5687828a451cbb280cdaa433ebd4, hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/9962a2709acc46dfbf3faec0fbf31366, hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/66dd0ed1bb684b3f935cb07d805cc031] to 
archive 2023-06-03 09:00:18,146 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-06-03 09:00:18,149 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/.tmp/table/3207d3859f5b4f9a80ebf77961a0517d 2023-06-03 09:00:18,149 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/74cd5687828a451cbb280cdaa433ebd4 to hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/74cd5687828a451cbb280cdaa433ebd4 2023-06-03 09:00:18,153 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/9962a2709acc46dfbf3faec0fbf31366 to hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/9962a2709acc46dfbf3faec0fbf31366 2023-06-03 09:00:18,155 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/66dd0ed1bb684b3f935cb07d805cc031 to hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/info/66dd0ed1bb684b3f935cb07d805cc031 2023-06-03 09:00:18,163 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/.tmp/info/54f4fac2bd104e7395c87c885c68d9c8 as hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/info/54f4fac2bd104e7395c87c885c68d9c8 2023-06-03 09:00:18,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4ea52727b7bcc5aea73a03bcc34c035c/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-06-03 09:00:18,166 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 
2023-06-03 09:00:18,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4ea52727b7bcc5aea73a03bcc34c035c: 2023-06-03 09:00:18,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685782757641.4ea52727b7bcc5aea73a03bcc34c035c. 2023-06-03 09:00:18,170 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/info/54f4fac2bd104e7395c87c885c68d9c8, entries=20, sequenceid=14, filesize=7.6 K 2023-06-03 09:00:18,171 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/.tmp/table/3207d3859f5b4f9a80ebf77961a0517d as hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/table/3207d3859f5b4f9a80ebf77961a0517d 2023-06-03 09:00:18,176 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/table/3207d3859f5b4f9a80ebf77961a0517d, entries=4, sequenceid=14, filesize=4.9 K 2023-06-03 09:00:18,177 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3174, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 81ms, sequenceid=14, compaction requested=false 2023-06-03 09:00:18,183 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-06-03 09:00:18,183 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-03 09:00:18,184 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-03 09:00:18,184 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 09:00:18,184 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-03 09:00:18,294 INFO [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38381,1685782756606; all regions closed. 
2023-06-03 09:00:18,295 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:18,301 DEBUG [RS:0;jenkins-hbase4:38381] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/oldWALs 2023-06-03 09:00:18,301 INFO [RS:0;jenkins-hbase4:38381] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C38381%2C1685782756606.meta:.meta(num 1685782757098) 2023-06-03 09:00:18,302 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/WALs/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:18,307 DEBUG [RS:0;jenkins-hbase4:38381] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/oldWALs 2023-06-03 09:00:18,307 INFO [RS:0;jenkins-hbase4:38381] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C38381%2C1685782756606:(num 1685782818078) 2023-06-03 09:00:18,307 DEBUG [RS:0;jenkins-hbase4:38381] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:00:18,307 INFO [RS:0;jenkins-hbase4:38381] regionserver.LeaseManager(133): Closed leases 2023-06-03 09:00:18,307 INFO [RS:0;jenkins-hbase4:38381] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-03 09:00:18,307 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-03 09:00:18,308 INFO [RS:0;jenkins-hbase4:38381] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38381 2023-06-03 09:00:18,313 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38381,1685782756606 2023-06-03 09:00:18,313 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 09:00:18,313 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 09:00:18,314 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38381,1685782756606] 2023-06-03 09:00:18,314 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38381,1685782756606; numProcessing=1 2023-06-03 09:00:18,315 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38381,1685782756606 already deleted, retry=false 2023-06-03 09:00:18,316 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38381,1685782756606 expired; onlineServers=0 2023-06-03 09:00:18,316 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,38191,1685782756566' ***** 2023-06-03 09:00:18,316 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-03 09:00:18,316 DEBUG [M:0;jenkins-hbase4:38191] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d0f8dc9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-03 09:00:18,316 INFO [M:0;jenkins-hbase4:38191] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38191,1685782756566 2023-06-03 09:00:18,316 INFO [M:0;jenkins-hbase4:38191] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38191,1685782756566; all regions closed. 2023-06-03 09:00:18,316 DEBUG [M:0;jenkins-hbase4:38191] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:00:18,316 DEBUG [M:0;jenkins-hbase4:38191] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-03 09:00:18,316 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-06-03 09:00:18,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782756743] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782756743,5,FailOnTimeoutGroup] 2023-06-03 09:00:18,316 DEBUG [M:0;jenkins-hbase4:38191] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-03 09:00:18,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782756748] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782756748,5,FailOnTimeoutGroup] 2023-06-03 09:00:18,317 INFO [M:0;jenkins-hbase4:38191] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-03 09:00:18,318 INFO [M:0;jenkins-hbase4:38191] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-03 09:00:18,318 INFO [M:0;jenkins-hbase4:38191] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-06-03 09:00:18,318 DEBUG [M:0;jenkins-hbase4:38191] master.HMaster(1512): Stopping service threads 2023-06-03 09:00:18,318 INFO [M:0;jenkins-hbase4:38191] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-03 09:00:18,318 ERROR [M:0;jenkins-hbase4:38191] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-03 09:00:18,319 INFO [M:0;jenkins-hbase4:38191] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-03 09:00:18,319 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-03 09:00:18,319 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-03 09:00:18,319 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:00:18,319 DEBUG [M:0;jenkins-hbase4:38191] zookeeper.ZKUtil(398): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-03 09:00:18,319 WARN [M:0;jenkins-hbase4:38191] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-03 09:00:18,319 INFO [M:0;jenkins-hbase4:38191] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-03 09:00:18,319 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 09:00:18,319 INFO [M:0;jenkins-hbase4:38191] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-03 09:00:18,320 DEBUG [M:0;jenkins-hbase4:38191] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-03 09:00:18,320 INFO [M:0;jenkins-hbase4:38191] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:00:18,320 DEBUG [M:0;jenkins-hbase4:38191] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:00:18,320 DEBUG [M:0;jenkins-hbase4:38191] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-03 09:00:18,320 DEBUG [M:0;jenkins-hbase4:38191] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-03 09:00:18,320 INFO [M:0;jenkins-hbase4:38191] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.89 KB heapSize=47.33 KB 2023-06-03 09:00:18,333 INFO [M:0;jenkins-hbase4:38191] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.89 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/690e6eb616e54e1d9bf747ba04d7597e 2023-06-03 09:00:18,337 INFO [M:0;jenkins-hbase4:38191] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 690e6eb616e54e1d9bf747ba04d7597e 2023-06-03 09:00:18,338 DEBUG [M:0;jenkins-hbase4:38191] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/690e6eb616e54e1d9bf747ba04d7597e as hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/690e6eb616e54e1d9bf747ba04d7597e 2023-06-03 09:00:18,343 INFO [M:0;jenkins-hbase4:38191] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 690e6eb616e54e1d9bf747ba04d7597e 2023-06-03 09:00:18,343 INFO [M:0;jenkins-hbase4:38191] regionserver.HStore(1080): Added hdfs://localhost:37159/user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/690e6eb616e54e1d9bf747ba04d7597e, entries=11, sequenceid=100, filesize=6.1 K 2023-06-03 09:00:18,344 INFO [M:0;jenkins-hbase4:38191] regionserver.HRegion(2948): Finished flush of dataSize ~38.89 KB/39824, heapSize ~47.31 KB/48448, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=100, compaction requested=false 2023-06-03 09:00:18,345 INFO [M:0;jenkins-hbase4:38191] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:00:18,345 DEBUG [M:0;jenkins-hbase4:38191] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 09:00:18,346 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2f34ab83-94fc-7c9d-dda7-8b7ed6cb3c8c/MasterData/WALs/jenkins-hbase4.apache.org,38191,1685782756566 2023-06-03 09:00:18,348 INFO [M:0;jenkins-hbase4:38191] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-03 09:00:18,348 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-03 09:00:18,349 INFO [M:0;jenkins-hbase4:38191] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38191 2023-06-03 09:00:18,351 DEBUG [M:0;jenkins-hbase4:38191] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,38191,1685782756566 already deleted, retry=false 2023-06-03 09:00:18,414 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 09:00:18,414 INFO [RS:0;jenkins-hbase4:38381] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38381,1685782756606; zookeeper connection closed. 
2023-06-03 09:00:18,414 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): regionserver:38381-0x1008fe9ffc90001, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 09:00:18,415 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4a165d87] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4a165d87 2023-06-03 09:00:18,415 INFO [Listener at localhost/36119] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-03 09:00:18,514 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 09:00:18,514 INFO [M:0;jenkins-hbase4:38191] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38191,1685782756566; zookeeper connection closed. 2023-06-03 09:00:18,514 DEBUG [Listener at localhost/36119-EventThread] zookeeper.ZKWatcher(600): master:38191-0x1008fe9ffc90000, quorum=127.0.0.1:54897, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 09:00:18,515 WARN [Listener at localhost/36119] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 09:00:18,519 INFO [Listener at localhost/36119] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 09:00:18,524 WARN [BP-1274617576-172.31.14.131-1685782756017 heartbeating to localhost/127.0.0.1:37159] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1274617576-172.31.14.131-1685782756017 (Datanode Uuid 0dc83e59-f68c-4609-bc44-ad1197386834) service to localhost/127.0.0.1:37159 2023-06-03 09:00:18,525 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/cluster_6f9ccffd-83cd-042f-afc3-b0a25e59bab4/dfs/data/data3/current/BP-1274617576-172.31.14.131-1685782756017] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 09:00:18,525 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/cluster_6f9ccffd-83cd-042f-afc3-b0a25e59bab4/dfs/data/data4/current/BP-1274617576-172.31.14.131-1685782756017] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 09:00:18,625 WARN [Listener at localhost/36119] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 09:00:18,628 INFO [Listener at localhost/36119] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 09:00:18,733 WARN [BP-1274617576-172.31.14.131-1685782756017 heartbeating to localhost/127.0.0.1:37159] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 09:00:18,733 WARN [BP-1274617576-172.31.14.131-1685782756017 heartbeating to localhost/127.0.0.1:37159] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1274617576-172.31.14.131-1685782756017 (Datanode Uuid 91162933-e246-4235-932b-f99164063272) service to localhost/127.0.0.1:37159 2023-06-03 09:00:18,733 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/cluster_6f9ccffd-83cd-042f-afc3-b0a25e59bab4/dfs/data/data1/current/BP-1274617576-172.31.14.131-1685782756017] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 09:00:18,734 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/cluster_6f9ccffd-83cd-042f-afc3-b0a25e59bab4/dfs/data/data2/current/BP-1274617576-172.31.14.131-1685782756017] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 09:00:18,746 INFO [Listener at localhost/36119] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 09:00:18,857 INFO [Listener at localhost/36119] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-03 09:00:18,860 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-03 09:00:18,872 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-03 09:00:18,882 INFO [Listener at localhost/36119] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=93 (was 86) - Thread LEAK? -, OpenFileDescriptor=500 (was 465) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=35 (was 37), ProcessCount=169 (was 169), AvailableMemoryMB=1023 (was 1209) 2023-06-03 09:00:18,890 INFO [Listener at localhost/36119] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=94, OpenFileDescriptor=500, MaxFileDescriptor=60000, SystemLoadAverage=35, ProcessCount=169, AvailableMemoryMB=1022 2023-06-03 09:00:18,890 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-03 09:00:18,890 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/hadoop.log.dir so I do NOT create it in target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc 2023-06-03 09:00:18,890 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b727e5d-f706-1584-cb37-da793fdba3a4/hadoop.tmp.dir so I do NOT create it in target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc 2023-06-03 09:00:18,890 INFO [Listener at localhost/36119] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/cluster_127e7cca-a14a-af83-3631-02931177ad02, deleteOnExit=true 2023-06-03 09:00:18,890 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-03 09:00:18,891 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting 
test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/test.cache.data in system properties and HBase conf 2023-06-03 09:00:18,891 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/hadoop.tmp.dir in system properties and HBase conf 2023-06-03 09:00:18,891 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/hadoop.log.dir in system properties and HBase conf 2023-06-03 09:00:18,891 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-03 09:00:18,891 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-03 09:00:18,891 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-03 09:00:18,891 DEBUG [Listener at localhost/36119] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-03 09:00:18,891 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-03 09:00:18,891 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-03 09:00:18,892 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-03 09:00:18,892 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-03 09:00:18,892 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-03 09:00:18,892 INFO [Listener at localhost/36119] 
hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-03 09:00:18,892 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-03 09:00:18,892 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-03 09:00:18,892 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-03 09:00:18,892 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/nfs.dump.dir in system properties and HBase conf 2023-06-03 09:00:18,892 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/java.io.tmpdir in system properties and HBase conf 2023-06-03 09:00:18,892 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-03 09:00:18,892 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-03 09:00:18,893 INFO [Listener at localhost/36119] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-03 09:00:18,894 WARN [Listener at localhost/36119] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-03 09:00:18,897 WARN [Listener at localhost/36119] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-03 09:00:18,897 WARN [Listener at localhost/36119] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-03 09:00:18,934 WARN [Listener at localhost/36119] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 09:00:18,935 INFO [Listener at localhost/36119] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 09:00:18,940 INFO [Listener at localhost/36119] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/java.io.tmpdir/Jetty_localhost_38185_hdfs____3d9z4j/webapp 2023-06-03 09:00:19,029 INFO [Listener at localhost/36119] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38185 2023-06-03 09:00:19,030 WARN [Listener at localhost/36119] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-03 09:00:19,033 WARN [Listener at localhost/36119] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-03 09:00:19,033 WARN [Listener at localhost/36119] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-03 09:00:19,068 WARN [Listener at localhost/34905] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 09:00:19,078 WARN [Listener at localhost/34905] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 09:00:19,080 WARN [Listener at localhost/34905] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 09:00:19,081 INFO [Listener at localhost/34905] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 09:00:19,085 INFO [Listener at localhost/34905] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/java.io.tmpdir/Jetty_localhost_33125_datanode____keof5n/webapp 2023-06-03 09:00:19,175 INFO [Listener at localhost/34905] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33125 2023-06-03 09:00:19,180 WARN [Listener at localhost/44645] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 09:00:19,191 WARN [Listener at localhost/44645] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 09:00:19,193 WARN [Listener at localhost/44645] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 09:00:19,194 INFO [Listener at localhost/44645] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 09:00:19,197 INFO [Listener at localhost/44645] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/java.io.tmpdir/Jetty_localhost_42431_datanode____.4zg539/webapp 2023-06-03 09:00:19,282 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x63c0007d4e4e1353: Processing first storage report for DS-535b7414-eaf9-4cbc-8aa2-ee03fb87faa7 from datanode 97b1e5c9-1a6e-48de-a813-d915e254033f 2023-06-03 09:00:19,282 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x63c0007d4e4e1353: from storage DS-535b7414-eaf9-4cbc-8aa2-ee03fb87faa7 node DatanodeRegistration(127.0.0.1:43135, datanodeUuid=97b1e5c9-1a6e-48de-a813-d915e254033f, infoPort=34637, infoSecurePort=0, ipcPort=44645, storageInfo=lv=-57;cid=testClusterID;nsid=2118963255;c=1685782818899), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 09:00:19,282 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x63c0007d4e4e1353: Processing first storage report for DS-aac1de49-ce10-45c2-90f6-c1b1872737b1 from datanode 97b1e5c9-1a6e-48de-a813-d915e254033f 2023-06-03 09:00:19,282 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x63c0007d4e4e1353: from storage DS-aac1de49-ce10-45c2-90f6-c1b1872737b1 node DatanodeRegistration(127.0.0.1:43135, datanodeUuid=97b1e5c9-1a6e-48de-a813-d915e254033f, infoPort=34637, infoSecurePort=0, ipcPort=44645, storageInfo=lv=-57;cid=testClusterID;nsid=2118963255;c=1685782818899), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 09:00:19,295 INFO [Listener at localhost/44645] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42431 2023-06-03 09:00:19,301 WARN [Listener at localhost/35195] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 09:00:19,390 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd2d4e91b6d027a4d: Processing first storage report for DS-43dc8c8c-73a0-498c-8a65-f8cf06d2b7a1 from datanode a310907c-28e6-4c2d-bc18-c67838541530 2023-06-03 09:00:19,390 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd2d4e91b6d027a4d: from storage DS-43dc8c8c-73a0-498c-8a65-f8cf06d2b7a1 node DatanodeRegistration(127.0.0.1:41335, datanodeUuid=a310907c-28e6-4c2d-bc18-c67838541530, infoPort=44675, infoSecurePort=0, ipcPort=35195, storageInfo=lv=-57;cid=testClusterID;nsid=2118963255;c=1685782818899), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 09:00:19,390 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd2d4e91b6d027a4d: Processing first storage report for DS-a39cc335-1e38-4217-9c7b-d71da649dd3f from datanode a310907c-28e6-4c2d-bc18-c67838541530 2023-06-03 09:00:19,390 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd2d4e91b6d027a4d: from storage DS-a39cc335-1e38-4217-9c7b-d71da649dd3f node DatanodeRegistration(127.0.0.1:41335, datanodeUuid=a310907c-28e6-4c2d-bc18-c67838541530, infoPort=44675, infoSecurePort=0, ipcPort=35195, storageInfo=lv=-57;cid=testClusterID;nsid=2118963255;c=1685782818899), 
blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 09:00:19,408 DEBUG [Listener at localhost/35195] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc 2023-06-03 09:00:19,410 INFO [Listener at localhost/35195] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/cluster_127e7cca-a14a-af83-3631-02931177ad02/zookeeper_0, clientPort=64598, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/cluster_127e7cca-a14a-af83-3631-02931177ad02/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/cluster_127e7cca-a14a-af83-3631-02931177ad02/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-03 09:00:19,411 INFO [Listener at localhost/35195] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64598 2023-06-03 09:00:19,411 INFO [Listener at localhost/35195] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:00:19,412 INFO [Listener at localhost/35195] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:00:19,424 INFO [Listener at localhost/35195] util.FSUtils(471): Created version file at hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5 with version=8 2023-06-03 09:00:19,424 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/hbase-staging 2023-06-03 09:00:19,426 INFO [Listener at localhost/35195] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-03 09:00:19,426 INFO [Listener at localhost/35195] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 09:00:19,426 INFO [Listener at localhost/35195] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-03 09:00:19,426 INFO [Listener at localhost/35195] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-03 09:00:19,426 INFO [Listener at localhost/35195] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 09:00:19,426 INFO [Listener at localhost/35195] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-03 
09:00:19,426 INFO [Listener at localhost/35195] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-03 09:00:19,427 INFO [Listener at localhost/35195] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44421 2023-06-03 09:00:19,428 INFO [Listener at localhost/35195] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:00:19,428 INFO [Listener at localhost/35195] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:00:19,429 INFO [Listener at localhost/35195] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44421 connecting to ZooKeeper ensemble=127.0.0.1:64598 2023-06-03 09:00:19,438 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:444210x0, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-03 09:00:19,439 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44421-0x1008feaf5540000 connected 2023-06-03 09:00:19,452 DEBUG [Listener at localhost/35195] zookeeper.ZKUtil(164): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 09:00:19,452 DEBUG [Listener at localhost/35195] zookeeper.ZKUtil(164): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 09:00:19,452 DEBUG [Listener at localhost/35195] zookeeper.ZKUtil(164): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-03 09:00:19,453 DEBUG [Listener at localhost/35195] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44421 2023-06-03 09:00:19,453 DEBUG [Listener at localhost/35195] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44421 2023-06-03 09:00:19,453 DEBUG [Listener at localhost/35195] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44421 2023-06-03 09:00:19,453 DEBUG [Listener at localhost/35195] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44421 2023-06-03 09:00:19,454 DEBUG [Listener at localhost/35195] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44421 2023-06-03 09:00:19,454 INFO [Listener at localhost/35195] master.HMaster(444): hbase.rootdir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5, hbase.cluster.distributed=false 2023-06-03 09:00:19,466 INFO [Listener at localhost/35195] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-03 09:00:19,466 INFO [Listener at localhost/35195] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 09:00:19,466 INFO [Listener at 
localhost/35195] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-03 09:00:19,466 INFO [Listener at localhost/35195] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-03 09:00:19,466 INFO [Listener at localhost/35195] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 09:00:19,466 INFO [Listener at localhost/35195] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-03 09:00:19,467 INFO [Listener at localhost/35195] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-03 09:00:19,468 INFO [Listener at localhost/35195] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37577 2023-06-03 09:00:19,468 INFO [Listener at localhost/35195] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-03 09:00:19,469 DEBUG [Listener at localhost/35195] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-03 09:00:19,469 INFO [Listener at localhost/35195] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:00:19,470 INFO [Listener at localhost/35195] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:00:19,471 INFO [Listener at localhost/35195] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37577 connecting to ZooKeeper ensemble=127.0.0.1:64598 2023-06-03 09:00:19,474 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): regionserver:375770x0, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-03 09:00:19,475 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37577-0x1008feaf5540001 connected 2023-06-03 09:00:19,475 DEBUG [Listener at localhost/35195] zookeeper.ZKUtil(164): regionserver:37577-0x1008feaf5540001, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 09:00:19,475 DEBUG [Listener at localhost/35195] zookeeper.ZKUtil(164): regionserver:37577-0x1008feaf5540001, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 09:00:19,476 DEBUG [Listener at localhost/35195] zookeeper.ZKUtil(164): regionserver:37577-0x1008feaf5540001, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-03 09:00:19,476 DEBUG [Listener at localhost/35195] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37577 2023-06-03 09:00:19,476 DEBUG [Listener at localhost/35195] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37577 2023-06-03 09:00:19,477 DEBUG [Listener at localhost/35195] ipc.RpcExecutor(311): 
Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37577 2023-06-03 09:00:19,477 DEBUG [Listener at localhost/35195] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37577 2023-06-03 09:00:19,477 DEBUG [Listener at localhost/35195] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37577 2023-06-03 09:00:19,478 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44421,1685782819425 2023-06-03 09:00:19,480 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-03 09:00:19,480 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44421,1685782819425 2023-06-03 09:00:19,482 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-03 09:00:19,482 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): regionserver:37577-0x1008feaf5540001, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-03 09:00:19,482 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:00:19,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-03 09:00:19,484 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44421,1685782819425 from backup master directory 2023-06-03 09:00:19,484 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-03 09:00:19,485 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44421,1685782819425 2023-06-03 09:00:19,485 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-03 09:00:19,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44421,1685782819425 2023-06-03 09:00:19,485 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-03 09:00:19,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/hbase.id with ID: 881969b1-9f7b-4a6a-8c3c-82cbb9114e7f 2023-06-03 09:00:19,509 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:00:19,512 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:00:19,519 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x10efb0e0 to 127.0.0.1:64598 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 09:00:19,522 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ff88552, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 09:00:19,522 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-03 09:00:19,522 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-03 09:00:19,523 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 09:00:19,524 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/data/master/store-tmp 2023-06-03 09:00:19,531 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect 
now enable 2023-06-03 09:00:19,531 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-03 09:00:19,531 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:00:19,531 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:00:19,531 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-03 09:00:19,531 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:00:19,531 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:00:19,531 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 09:00:19,532 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/WALs/jenkins-hbase4.apache.org,44421,1685782819425 2023-06-03 09:00:19,534 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44421%2C1685782819425, suffix=, logDir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/WALs/jenkins-hbase4.apache.org,44421,1685782819425, archiveDir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/oldWALs, maxLogs=10 2023-06-03 09:00:19,543 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/WALs/jenkins-hbase4.apache.org,44421,1685782819425/jenkins-hbase4.apache.org%2C44421%2C1685782819425.1685782819534 2023-06-03 09:00:19,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-43dc8c8c-73a0-498c-8a65-f8cf06d2b7a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43135,DS-535b7414-eaf9-4cbc-8aa2-ee03fb87faa7,DISK]] 2023-06-03 09:00:19,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-03 09:00:19,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:00:19,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 09:00:19,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 09:00:19,545 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-03 09:00:19,547 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-03 09:00:19,547 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-03 09:00:19,547 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:00:19,548 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-03 09:00:19,549 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-03 09:00:19,551 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 09:00:19,553 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 09:00:19,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=751296, jitterRate=-0.0446782261133194}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 09:00:19,553 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 09:00:19,554 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-03 09:00:19,554 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): 
Starting the Region Procedure Store, number threads=5 2023-06-03 09:00:19,554 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-03 09:00:19,554 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-03 09:00:19,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-03 09:00:19,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-03 09:00:19,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-03 09:00:19,556 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-03 09:00:19,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-03 09:00:19,567 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-03 09:00:19,567 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-06-03 09:00:19,567 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-03 09:00:19,568 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-03 09:00:19,568 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-03 09:00:19,571 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:00:19,571 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-03 09:00:19,571 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-03 09:00:19,572 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-03 09:00:19,573 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-03 09:00:19,573 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): regionserver:37577-0x1008feaf5540001, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-03 09:00:19,573 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:00:19,573 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44421,1685782819425, sessionid=0x1008feaf5540000, setting cluster-up flag (Was=false) 2023-06-03 09:00:19,577 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:00:19,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-03 09:00:19,582 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44421,1685782819425 2023-06-03 09:00:19,585 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 
09:00:19,589 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-03 09:00:19,590 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44421,1685782819425 2023-06-03 09:00:19,590 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/.hbase-snapshot/.tmp 2023-06-03 09:00:19,592 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-03 09:00:19,592 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 09:00:19,592 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 09:00:19,592 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 09:00:19,592 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 09:00:19,592 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-06-03 09:00:19,593 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:00:19,593 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-03 09:00:19,593 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:00:19,593 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685782849593 2023-06-03 09:00:19,594 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-03 09:00:19,594 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-03 09:00:19,594 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-03 09:00:19,594 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-03 09:00:19,594 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-03 09:00:19,594 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-03 09:00:19,594 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-03 09:00:19,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-03 09:00:19,595 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-03 09:00:19,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-03 09:00:19,595 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-03 09:00:19,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-03 09:00:19,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-03 09:00:19,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-03 09:00:19,595 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782819595,5,FailOnTimeoutGroup] 2023-06-03 09:00:19,596 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782819596,5,FailOnTimeoutGroup] 2023-06-03 09:00:19,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-03 09:00:19,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-03 09:00:19,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-03 09:00:19,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-03 09:00:19,596 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-03 09:00:19,607 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-03 09:00:19,607 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-03 09:00:19,607 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5 2023-06-03 09:00:19,614 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:00:19,618 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-03 09:00:19,619 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/info 2023-06-03 09:00:19,619 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-03 09:00:19,620 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:00:19,620 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-03 09:00:19,621 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/rep_barrier 2023-06-03 09:00:19,621 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-03 09:00:19,622 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:00:19,622 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-03 09:00:19,623 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/table 2023-06-03 09:00:19,623 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-03 09:00:19,624 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:00:19,624 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740 2023-06-03 09:00:19,625 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740 2023-06-03 09:00:19,626 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-03 09:00:19,627 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-03 09:00:19,630 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 09:00:19,630 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=877008, jitterRate=0.11517360806465149}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-03 09:00:19,630 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-03 09:00:19,630 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 09:00:19,630 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 09:00:19,630 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 09:00:19,630 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 09:00:19,630 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 09:00:19,632 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-03 09:00:19,632 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 09:00:19,633 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-03 09:00:19,633 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-03 09:00:19,633 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-03 09:00:19,634 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-03 09:00:19,635 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-03 09:00:19,679 INFO [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(951): ClusterId : 881969b1-9f7b-4a6a-8c3c-82cbb9114e7f 2023-06-03 09:00:19,680 DEBUG [RS:0;jenkins-hbase4:37577] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-03 09:00:19,683 DEBUG [RS:0;jenkins-hbase4:37577] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-03 09:00:19,683 DEBUG [RS:0;jenkins-hbase4:37577] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-03 09:00:19,685 DEBUG [RS:0;jenkins-hbase4:37577] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-03 09:00:19,685 DEBUG [RS:0;jenkins-hbase4:37577] zookeeper.ReadOnlyZKClient(139): Connect 0x65b69b26 to 127.0.0.1:64598 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 09:00:19,689 DEBUG [RS:0;jenkins-hbase4:37577] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b7169e0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 09:00:19,689 DEBUG [RS:0;jenkins-hbase4:37577] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f4ef056, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-03 09:00:19,698 DEBUG [RS:0;jenkins-hbase4:37577] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37577 2023-06-03 09:00:19,698 INFO [RS:0;jenkins-hbase4:37577] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-03 09:00:19,698 INFO [RS:0;jenkins-hbase4:37577] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-03 09:00:19,698 DEBUG [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-03 09:00:19,698 INFO [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,44421,1685782819425 with isa=jenkins-hbase4.apache.org/172.31.14.131:37577, startcode=1685782819466 2023-06-03 09:00:19,699 DEBUG [RS:0;jenkins-hbase4:37577] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-03 09:00:19,701 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40769, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-06-03 09:00:19,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:19,703 DEBUG [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5 2023-06-03 09:00:19,703 DEBUG [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34905 2023-06-03 09:00:19,703 DEBUG [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-03 09:00:19,705 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 09:00:19,705 DEBUG [RS:0;jenkins-hbase4:37577] zookeeper.ZKUtil(162): regionserver:37577-0x1008feaf5540001, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:19,705 WARN [RS:0;jenkins-hbase4:37577] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-03 09:00:19,705 INFO [RS:0;jenkins-hbase4:37577] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 09:00:19,705 DEBUG [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(1946): logDir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:19,706 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37577,1685782819466] 2023-06-03 09:00:19,709 DEBUG [RS:0;jenkins-hbase4:37577] zookeeper.ZKUtil(162): regionserver:37577-0x1008feaf5540001, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:19,710 DEBUG [RS:0;jenkins-hbase4:37577] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-03 09:00:19,710 INFO [RS:0;jenkins-hbase4:37577] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-03 09:00:19,714 INFO [RS:0;jenkins-hbase4:37577] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-03 09:00:19,715 INFO [RS:0;jenkins-hbase4:37577] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-03 09:00:19,715 INFO [RS:0;jenkins-hbase4:37577] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 09:00:19,715 INFO [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-03 09:00:19,716 INFO [RS:0;jenkins-hbase4:37577] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
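Note on the MemStoreFlusher line above: the low-water mark of 743.3 M is 95% of the 782.4 M global limit (782.4 M x 0.95 ≈ 743.3 M), which lines up with the usual hbase.regionserver.global.memstore.size.lower.limit default of 0.95, so no non-default memstore watermark appears to be in play in this run.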
2023-06-03 09:00:19,716 DEBUG [RS:0;jenkins-hbase4:37577] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:00:19,716 DEBUG [RS:0;jenkins-hbase4:37577] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:00:19,716 DEBUG [RS:0;jenkins-hbase4:37577] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:00:19,716 DEBUG [RS:0;jenkins-hbase4:37577] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:00:19,716 DEBUG [RS:0;jenkins-hbase4:37577] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:00:19,716 DEBUG [RS:0;jenkins-hbase4:37577] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-03 09:00:19,716 DEBUG [RS:0;jenkins-hbase4:37577] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:00:19,716 DEBUG [RS:0;jenkins-hbase4:37577] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:00:19,716 DEBUG [RS:0;jenkins-hbase4:37577] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:00:19,716 DEBUG [RS:0;jenkins-hbase4:37577] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:00:19,717 INFO [RS:0;jenkins-hbase4:37577] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-03 09:00:19,717 INFO [RS:0;jenkins-hbase4:37577] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-03 09:00:19,717 INFO [RS:0;jenkins-hbase4:37577] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-03 09:00:19,728 INFO [RS:0;jenkins-hbase4:37577] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-03 09:00:19,728 INFO [RS:0;jenkins-hbase4:37577] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37577,1685782819466-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-03 09:00:19,738 INFO [RS:0;jenkins-hbase4:37577] regionserver.Replication(203): jenkins-hbase4.apache.org,37577,1685782819466 started 2023-06-03 09:00:19,738 INFO [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37577,1685782819466, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37577, sessionid=0x1008feaf5540001 2023-06-03 09:00:19,738 DEBUG [RS:0;jenkins-hbase4:37577] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-03 09:00:19,738 DEBUG [RS:0;jenkins-hbase4:37577] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:19,738 DEBUG [RS:0;jenkins-hbase4:37577] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37577,1685782819466' 2023-06-03 09:00:19,738 DEBUG [RS:0;jenkins-hbase4:37577] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 09:00:19,739 DEBUG [RS:0;jenkins-hbase4:37577] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 09:00:19,739 DEBUG [RS:0;jenkins-hbase4:37577] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-03 09:00:19,739 DEBUG [RS:0;jenkins-hbase4:37577] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-03 09:00:19,739 DEBUG [RS:0;jenkins-hbase4:37577] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:19,739 DEBUG [RS:0;jenkins-hbase4:37577] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37577,1685782819466' 2023-06-03 09:00:19,739 DEBUG [RS:0;jenkins-hbase4:37577] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-03 09:00:19,739 DEBUG [RS:0;jenkins-hbase4:37577] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-03 09:00:19,740 DEBUG [RS:0;jenkins-hbase4:37577] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-03 09:00:19,740 INFO [RS:0;jenkins-hbase4:37577] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-03 09:00:19,740 INFO [RS:0;jenkins-hbase4:37577] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-03 09:00:19,785 DEBUG [jenkins-hbase4:44421] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-03 09:00:19,786 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37577,1685782819466, state=OPENING 2023-06-03 09:00:19,788 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-03 09:00:19,789 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:00:19,790 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-03 09:00:19,790 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37577,1685782819466}] 2023-06-03 09:00:19,841 INFO [RS:0;jenkins-hbase4:37577] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37577%2C1685782819466, suffix=, logDir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466, archiveDir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/oldWALs, maxLogs=32 2023-06-03 09:00:19,849 INFO [RS:0;jenkins-hbase4:37577] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466/jenkins-hbase4.apache.org%2C37577%2C1685782819466.1685782819842 2023-06-03 09:00:19,849 DEBUG [RS:0;jenkins-hbase4:37577] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-43dc8c8c-73a0-498c-8a65-f8cf06d2b7a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43135,DS-535b7414-eaf9-4cbc-8aa2-ee03fb87faa7,DISK]] 2023-06-03 09:00:19,944 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:19,944 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-03 09:00:19,946 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36626, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-03 09:00:19,949 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-03 09:00:19,949 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 09:00:19,951 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37577%2C1685782819466.meta, suffix=.meta, logDir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466, archiveDir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/oldWALs, maxLogs=32 2023-06-03 09:00:19,961 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466/jenkins-hbase4.apache.org%2C37577%2C1685782819466.meta.1685782819952.meta 2023-06-03 09:00:19,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-43dc8c8c-73a0-498c-8a65-f8cf06d2b7a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43135,DS-535b7414-eaf9-4cbc-8aa2-ee03fb87faa7,DISK]] 2023-06-03 09:00:19,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-03 09:00:19,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-03 09:00:19,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-03 09:00:19,961 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-03 09:00:19,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-03 09:00:19,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:00:19,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-03 09:00:19,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-03 09:00:19,963 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-03 09:00:19,963 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/info 2023-06-03 09:00:19,964 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/info 2023-06-03 09:00:19,964 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-03 09:00:19,964 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:00:19,965 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-03 09:00:19,965 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/rep_barrier 2023-06-03 09:00:19,965 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/rep_barrier 2023-06-03 09:00:19,966 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-03 09:00:19,966 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:00:19,966 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-03 09:00:19,967 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/table 2023-06-03 09:00:19,967 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/table 2023-06-03 09:00:19,967 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-03 09:00:19,968 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:00:19,968 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740 2023-06-03 09:00:19,969 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740 2023-06-03 09:00:19,971 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-03 09:00:19,973 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-03 09:00:19,973 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=690493, jitterRate=-0.12199360132217407}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-03 09:00:19,973 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-03 09:00:19,975 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685782819944 2023-06-03 09:00:19,978 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-03 09:00:19,979 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-03 09:00:19,980 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37577,1685782819466, state=OPEN 2023-06-03 09:00:19,981 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-03 09:00:19,982 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-03 09:00:19,984 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-03 09:00:19,984 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37577,1685782819466 in 191 msec 2023-06-03 09:00:19,986 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-03 09:00:19,986 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 351 msec 2023-06-03 09:00:19,988 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 395 msec 2023-06-03 09:00:19,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685782819988, completionTime=-1 2023-06-03 09:00:19,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-03 09:00:19,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-03 09:00:19,990 DEBUG [hconnection-0x6afe0047-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-03 09:00:19,993 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36640, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-03 09:00:19,994 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-03 09:00:19,994 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685782879994 2023-06-03 09:00:19,994 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685782939994 2023-06-03 09:00:19,994 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-06-03 09:00:20,000 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44421,1685782819425-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 09:00:20,001 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44421,1685782819425-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 09:00:20,001 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44421,1685782819425-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 09:00:20,001 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44421, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 09:00:20,001 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-03 09:00:20,001 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
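The AbstractFSWAL lines a little earlier report the WAL sizing used in this run: blocksize=256 MB, rollsize=128 MB and maxLogs=32; the roll size is simply the block size times a 0.5 roll multiplier. A minimal, illustrative sketch of how those numbers relate to the standard HBase 2.x WAL settings follows; the keys and fallback values below are the stock ones and are not read from this test's actual site configuration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Illustrative only: how an FSHLog-backed WAL derives its roll size from
    // configuration. The values in the comments mirror the log lines above.
    public final class WalRollSizingSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // If unset, HBase falls back to a multiple of the filesystem block size;
        // this run reports blocksize=256 MB.
        long blockSize = conf.getLong("hbase.regionserver.hlog.blocksize", 256L << 20);
        // A roll multiplier of 0.5 gives rollsize=128 MB, as logged.
        float multiplier = conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        long rollSize = (long) (blockSize * multiplier);
        // The run above caps the number of un-archived WAL files at maxLogs=32.
        int maxLogs = conf.getInt("hbase.regionserver.maxlogs", 32);
        System.out.printf("blocksize=%d rollsize=%d maxLogs=%d%n", blockSize, rollSize, maxLogs);
      }
    }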
2023-06-03 09:00:20,001 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-03 09:00:20,002 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-03 09:00:20,002 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-03 09:00:20,004 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-03 09:00:20,004 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-03 09:00:20,006 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/.tmp/data/hbase/namespace/f0596de6e6fa870e80124c219e871a19 2023-06-03 09:00:20,006 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/.tmp/data/hbase/namespace/f0596de6e6fa870e80124c219e871a19 empty. 2023-06-03 09:00:20,007 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/.tmp/data/hbase/namespace/f0596de6e6fa870e80124c219e871a19 2023-06-03 09:00:20,007 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-03 09:00:20,018 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-03 09:00:20,019 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => f0596de6e6fa870e80124c219e871a19, NAME => 'hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/.tmp 2023-06-03 09:00:20,027 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:00:20,027 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing f0596de6e6fa870e80124c219e871a19, disabling compactions & flushes 2023-06-03 09:00:20,027 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. 
2023-06-03 09:00:20,027 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. 2023-06-03 09:00:20,027 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. after waiting 0 ms 2023-06-03 09:00:20,027 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. 2023-06-03 09:00:20,027 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. 2023-06-03 09:00:20,027 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for f0596de6e6fa870e80124c219e871a19: 2023-06-03 09:00:20,029 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-03 09:00:20,030 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782820030"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782820030"}]},"ts":"1685782820030"} 2023-06-03 09:00:20,032 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-03 09:00:20,033 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-03 09:00:20,034 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782820033"}]},"ts":"1685782820033"} 2023-06-03 09:00:20,035 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-03 09:00:20,040 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f0596de6e6fa870e80124c219e871a19, ASSIGN}] 2023-06-03 09:00:20,042 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f0596de6e6fa870e80124c219e871a19, ASSIGN 2023-06-03 09:00:20,043 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=f0596de6e6fa870e80124c219e871a19, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37577,1685782819466; forceNewPlan=false, retain=false 2023-06-03 09:00:20,194 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=f0596de6e6fa870e80124c219e871a19, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:20,194 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782820194"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782820194"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782820194"}]},"ts":"1685782820194"} 2023-06-03 09:00:20,196 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure f0596de6e6fa870e80124c219e871a19, server=jenkins-hbase4.apache.org,37577,1685782819466}] 2023-06-03 09:00:20,352 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. 2023-06-03 09:00:20,352 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f0596de6e6fa870e80124c219e871a19, NAME => 'hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19.', STARTKEY => '', ENDKEY => ''} 2023-06-03 09:00:20,352 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace f0596de6e6fa870e80124c219e871a19 2023-06-03 09:00:20,352 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:00:20,352 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f0596de6e6fa870e80124c219e871a19 2023-06-03 09:00:20,352 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f0596de6e6fa870e80124c219e871a19 2023-06-03 09:00:20,354 INFO [StoreOpener-f0596de6e6fa870e80124c219e871a19-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f0596de6e6fa870e80124c219e871a19 2023-06-03 09:00:20,355 DEBUG [StoreOpener-f0596de6e6fa870e80124c219e871a19-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/namespace/f0596de6e6fa870e80124c219e871a19/info 2023-06-03 09:00:20,355 DEBUG [StoreOpener-f0596de6e6fa870e80124c219e871a19-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/namespace/f0596de6e6fa870e80124c219e871a19/info 2023-06-03 09:00:20,355 INFO [StoreOpener-f0596de6e6fa870e80124c219e871a19-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f0596de6e6fa870e80124c219e871a19 columnFamilyName info 2023-06-03 09:00:20,356 INFO [StoreOpener-f0596de6e6fa870e80124c219e871a19-1] regionserver.HStore(310): Store=f0596de6e6fa870e80124c219e871a19/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:00:20,357 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/namespace/f0596de6e6fa870e80124c219e871a19 2023-06-03 09:00:20,357 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/namespace/f0596de6e6fa870e80124c219e871a19 2023-06-03 09:00:20,360 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f0596de6e6fa870e80124c219e871a19 2023-06-03 09:00:20,362 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/namespace/f0596de6e6fa870e80124c219e871a19/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 09:00:20,363 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f0596de6e6fa870e80124c219e871a19; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=783352, jitterRate=-0.003916576504707336}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 09:00:20,363 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f0596de6e6fa870e80124c219e871a19: 2023-06-03 09:00:20,365 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19., pid=6, masterSystemTime=1685782820348 2023-06-03 09:00:20,367 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. 2023-06-03 09:00:20,367 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. 
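The "Opened f0596de6e6fa870e80124c219e871a19; next sequenceid=2" entry above also prints the effective split threshold computed by the region's split policy. Within rounding it is the configured hbase.hregion.max.filesize for this run (786432 bytes, the same value flagged in the MAX_FILESIZE warning further down) scaled by the logged jitter: 786432 x (1 - 0.0039166) ≈ 783352, matching desiredMaxFileSize=783352. The meta region's earlier line follows the same pattern: 786432 x (1 - 0.12199) ≈ 690493, i.e. the logged desiredMaxFileSize=690493 up to floating-point rounding.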
2023-06-03 09:00:20,368 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=f0596de6e6fa870e80124c219e871a19, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:20,368 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782820367"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685782820367"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685782820367"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782820367"}]},"ts":"1685782820367"} 2023-06-03 09:00:20,372 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-03 09:00:20,372 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure f0596de6e6fa870e80124c219e871a19, server=jenkins-hbase4.apache.org,37577,1685782819466 in 173 msec 2023-06-03 09:00:20,374 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-03 09:00:20,374 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=f0596de6e6fa870e80124c219e871a19, ASSIGN in 332 msec 2023-06-03 09:00:20,375 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-03 09:00:20,375 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782820375"}]},"ts":"1685782820375"} 2023-06-03 09:00:20,377 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-03 09:00:20,381 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-03 09:00:20,383 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 380 msec 2023-06-03 09:00:20,404 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-03 09:00:20,405 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-03 09:00:20,405 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:00:20,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-03 09:00:20,431 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): 
master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-03 09:00:20,436 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 15 msec 2023-06-03 09:00:20,443 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-03 09:00:20,451 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-03 09:00:20,455 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-06-03 09:00:20,467 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-03 09:00:20,470 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-03 09:00:20,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.985sec 2023-06-03 09:00:20,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-03 09:00:20,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-03 09:00:20,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-03 09:00:20,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44421,1685782819425-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-03 09:00:20,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44421,1685782819425-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-03 09:00:20,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-03 09:00:20,480 DEBUG [Listener at localhost/35195] zookeeper.ReadOnlyZKClient(139): Connect 0x1c75d30f to 127.0.0.1:64598 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 09:00:20,485 DEBUG [Listener at localhost/35195] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2853cf88, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 09:00:20,487 DEBUG [hconnection-0x56d5522-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-03 09:00:20,491 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43756, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-03 09:00:20,492 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44421,1685782819425 2023-06-03 09:00:20,493 INFO [Listener at localhost/35195] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:00:20,496 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-03 09:00:20,496 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:00:20,497 INFO [Listener at localhost/35195] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-03 09:00:20,499 DEBUG [Listener at localhost/35195] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-03 09:00:20,501 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39170, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-03 09:00:20,502 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-03 09:00:20,502 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
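With the minicluster up, the test client opens its own connection through the same ZooKeeper quorum (127.0.0.1:64598) and immediately turns the balancer off ("set balanceSwitch=false"). A hedged sketch of those two client-side steps with the stock HBase 2.x API follows; the class name and hard-coded port are illustrative, taken only from the addresses in this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Sketch: connect to the test cluster's ZooKeeper and disable the balancer,
    // mirroring the "Connect ... to 127.0.0.1:64598" and "set balanceSwitch=false" lines.
    public final class BalancerOffSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.setInt("hbase.zookeeper.property.clientPort", 64598); // quorum port from this run
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // synchronous=true waits for any in-flight balancer run to finish
          boolean previous = admin.balancerSwitch(false, true);
          System.out.println("balancer was previously " + (previous ? "on" : "off"));
        }
      }
    }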
2023-06-03 09:00:20,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-03 09:00:20,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-06-03 09:00:20,507 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-03 09:00:20,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-06-03 09:00:20,508 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-03 09:00:20,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-03 09:00:20,510 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/.tmp/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:20,510 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/.tmp/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2 empty. 
2023-06-03 09:00:20,511 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/.tmp/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:20,511 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-06-03 09:00:20,523 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-06-03 09:00:20,524 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9f1b994ea463b48490327c6293d524e2, NAME => 'TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/.tmp 2023-06-03 09:00:20,532 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:00:20,532 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1604): Closing 9f1b994ea463b48490327c6293d524e2, disabling compactions & flushes 2023-06-03 09:00:20,532 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 2023-06-03 09:00:20,532 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 2023-06-03 09:00:20,532 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. after waiting 0 ms 2023-06-03 09:00:20,532 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 2023-06-03 09:00:20,532 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 
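The create request above lists the complete descriptor for 'TestLogRolling-testLogRolling': a single 'info' family with BLOOMFILTER => 'ROW', VERSIONS => '1' and BLOCKSIZE => '65536', while the two TableDescriptorChecker warnings show that a deliberately small max file size (786432) and memstore flush size (8192) are in effect, presumably to force frequent flushes and log rolls. Below is a sketch of an equivalent client-side create using the HBase 2.x builder API; it is illustrative only, and the two small sizes are shown on the descriptor here even though in this run they may equally come from the cluster-wide configuration, which the checker also inspects:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative equivalent of the logged create call (not the test's actual code).
    public final class CreateTestTableSketch {
      static void createTestTable(Admin admin) throws IOException {
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setMaxVersions(1)                   // VERSIONS => '1'
            .setBlocksize(65536)                 // BLOCKSIZE => '65536'
            .build();
        admin.createTable(TableDescriptorBuilder
            .newBuilder(TableName.valueOf("TestLogRolling-testLogRolling"))
            .setColumnFamily(info)
            .setMaxFileSize(786432L)             // tiny MAX_FILESIZE -> early splits
            .setMemStoreFlushSize(8192L)         // tiny MEMSTORE_FLUSHSIZE -> frequent flushes
            .build());
      }
    }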
2023-06-03 09:00:20,533 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 9f1b994ea463b48490327c6293d524e2: 2023-06-03 09:00:20,534 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-03 09:00:20,535 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685782820535"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782820535"}]},"ts":"1685782820535"} 2023-06-03 09:00:20,537 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-03 09:00:20,538 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-03 09:00:20,538 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782820538"}]},"ts":"1685782820538"} 2023-06-03 09:00:20,539 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-06-03 09:00:20,542 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=9f1b994ea463b48490327c6293d524e2, ASSIGN}] 2023-06-03 09:00:20,543 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=9f1b994ea463b48490327c6293d524e2, ASSIGN 2023-06-03 09:00:20,544 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=9f1b994ea463b48490327c6293d524e2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37577,1685782819466; forceNewPlan=false, retain=false 2023-06-03 09:00:20,695 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=9f1b994ea463b48490327c6293d524e2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:20,695 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685782820695"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782820695"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782820695"}]},"ts":"1685782820695"} 2023-06-03 09:00:20,697 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 9f1b994ea463b48490327c6293d524e2, server=jenkins-hbase4.apache.org,37577,1685782819466}] 2023-06-03 09:00:20,852 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open 
TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 2023-06-03 09:00:20,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9f1b994ea463b48490327c6293d524e2, NAME => 'TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.', STARTKEY => '', ENDKEY => ''} 2023-06-03 09:00:20,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:20,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:00:20,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:20,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:20,854 INFO [StoreOpener-9f1b994ea463b48490327c6293d524e2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:20,855 DEBUG [StoreOpener-9f1b994ea463b48490327c6293d524e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info 2023-06-03 09:00:20,855 DEBUG [StoreOpener-9f1b994ea463b48490327c6293d524e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info 2023-06-03 09:00:20,856 INFO [StoreOpener-9f1b994ea463b48490327c6293d524e2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9f1b994ea463b48490327c6293d524e2 columnFamilyName info 2023-06-03 09:00:20,857 INFO [StoreOpener-9f1b994ea463b48490327c6293d524e2-1] regionserver.HStore(310): Store=9f1b994ea463b48490327c6293d524e2/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:00:20,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:20,858 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:20,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:20,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 09:00:20,862 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9f1b994ea463b48490327c6293d524e2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=716043, jitterRate=-0.08950431644916534}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 09:00:20,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9f1b994ea463b48490327c6293d524e2: 2023-06-03 09:00:20,863 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2., pid=11, masterSystemTime=1685782820849 2023-06-03 09:00:20,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 2023-06-03 09:00:20,865 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 
2023-06-03 09:00:20,865 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=9f1b994ea463b48490327c6293d524e2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:20,866 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685782820865"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685782820865"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685782820865"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782820865"}]},"ts":"1685782820865"} 2023-06-03 09:00:20,869 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-03 09:00:20,870 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 9f1b994ea463b48490327c6293d524e2, server=jenkins-hbase4.apache.org,37577,1685782819466 in 171 msec 2023-06-03 09:00:20,872 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-03 09:00:20,872 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=9f1b994ea463b48490327c6293d524e2, ASSIGN in 328 msec 2023-06-03 09:00:20,872 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-03 09:00:20,873 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782820872"}]},"ts":"1685782820872"} 2023-06-03 09:00:20,874 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-06-03 09:00:20,876 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-03 09:00:20,877 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 373 msec 2023-06-03 09:00:23,650 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-03 09:00:25,710 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-03 09:00:25,711 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-03 09:00:25,711 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-06-03 09:00:30,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-03 09:00:30,510 INFO [Listener at localhost/35195] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, 
procId: 9 completed 2023-06-03 09:00:30,512 DEBUG [Listener at localhost/35195] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-06-03 09:00:30,512 DEBUG [Listener at localhost/35195] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 2023-06-03 09:00:30,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:30,524 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9f1b994ea463b48490327c6293d524e2 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-03 09:00:30,535 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/713af2decba04cbc80e58d80b04dfa83 2023-06-03 09:00:30,543 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/713af2decba04cbc80e58d80b04dfa83 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/713af2decba04cbc80e58d80b04dfa83 2023-06-03 09:00:30,554 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=9f1b994ea463b48490327c6293d524e2, server=jenkins-hbase4.apache.org,37577,1685782819466 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-06-03 09:00:30,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] ipc.CallRunner(144): callId: 38 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:43756 deadline: 1685782840553, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=9f1b994ea463b48490327c6293d524e2, server=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:30,562 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/713af2decba04cbc80e58d80b04dfa83, entries=7, sequenceid=11, filesize=12.1 K 2023-06-03 09:00:30,563 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 
for 9f1b994ea463b48490327c6293d524e2 in 39ms, sequenceid=11, compaction requested=false 2023-06-03 09:00:30,563 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9f1b994ea463b48490327c6293d524e2: 2023-06-03 09:00:40,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:40,586 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9f1b994ea463b48490327c6293d524e2 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-06-03 09:00:40,600 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/4ee61ac412074d84891e643e43277cb0 2023-06-03 09:00:40,606 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/4ee61ac412074d84891e643e43277cb0 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4ee61ac412074d84891e643e43277cb0 2023-06-03 09:00:40,615 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4ee61ac412074d84891e643e43277cb0, entries=23, sequenceid=37, filesize=29.0 K 2023-06-03 09:00:40,616 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=2.10 KB/2152 for 9f1b994ea463b48490327c6293d524e2 in 30ms, sequenceid=37, compaction requested=false 2023-06-03 09:00:40,616 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9f1b994ea463b48490327c6293d524e2: 2023-06-03 09:00:40,616 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=41.1 K, sizeToCheck=16.0 K 2023-06-03 09:00:40,616 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-03 09:00:40,616 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4ee61ac412074d84891e643e43277cb0 because midkey is the same as first or last row 2023-06-03 09:00:42,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:42,599 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9f1b994ea463b48490327c6293d524e2 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-03 09:00:42,611 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=47 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/c81cff8e927e499d8241b161d0734e13 2023-06-03 09:00:42,618 DEBUG [MemStoreFlusher.0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/c81cff8e927e499d8241b161d0734e13 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/c81cff8e927e499d8241b161d0734e13 2023-06-03 09:00:42,623 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/c81cff8e927e499d8241b161d0734e13, entries=7, sequenceid=47, filesize=12.1 K 2023-06-03 09:00:42,624 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for 9f1b994ea463b48490327c6293d524e2 in 25ms, sequenceid=47, compaction requested=true 2023-06-03 09:00:42,624 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9f1b994ea463b48490327c6293d524e2: 2023-06-03 09:00:42,624 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=53.2 K, sizeToCheck=16.0 K 2023-06-03 09:00:42,624 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-03 09:00:42,624 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4ee61ac412074d84891e643e43277cb0 because midkey is the same as first or last row 2023-06-03 09:00:42,625 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:00:42,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:42,625 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-03 09:00:42,625 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9f1b994ea463b48490327c6293d524e2 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-06-03 09:00:42,627 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 54449 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-03 09:00:42,627 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1912): 9f1b994ea463b48490327c6293d524e2/info is initiating minor compaction (all files) 2023-06-03 09:00:42,627 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 9f1b994ea463b48490327c6293d524e2/info in TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 
2023-06-03 09:00:42,628 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/713af2decba04cbc80e58d80b04dfa83, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4ee61ac412074d84891e643e43277cb0, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/c81cff8e927e499d8241b161d0734e13] into tmpdir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp, totalSize=53.2 K 2023-06-03 09:00:42,628 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 713af2decba04cbc80e58d80b04dfa83, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685782830515 2023-06-03 09:00:42,629 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 4ee61ac412074d84891e643e43277cb0, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=37, earliestPutTs=1685782830525 2023-06-03 09:00:42,629 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting c81cff8e927e499d8241b161d0734e13, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=47, earliestPutTs=1685782840587 2023-06-03 09:00:42,650 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=69 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/c7fd0d2fa22a450b940e6238b046088a 2023-06-03 09:00:42,655 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] throttle.PressureAwareThroughputController(145): 9f1b994ea463b48490327c6293d524e2#info#compaction#29 average throughput is 18.98 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-03 09:00:42,664 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/c7fd0d2fa22a450b940e6238b046088a as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/c7fd0d2fa22a450b940e6238b046088a 2023-06-03 09:00:42,672 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/c7fd0d2fa22a450b940e6238b046088a, entries=19, sequenceid=69, filesize=24.7 K 2023-06-03 09:00:42,673 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=8.41 KB/8608 for 9f1b994ea463b48490327c6293d524e2 in 48ms, sequenceid=69, compaction requested=false 2023-06-03 09:00:42,673 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9f1b994ea463b48490327c6293d524e2: 2023-06-03 09:00:42,673 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=77.9 K, sizeToCheck=16.0 K 2023-06-03 09:00:42,673 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-03 09:00:42,673 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4ee61ac412074d84891e643e43277cb0 because midkey is the same as first or last row 2023-06-03 09:00:42,674 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/a1172d45334c4ee585bc0022b6757d1c as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/a1172d45334c4ee585bc0022b6757d1c 2023-06-03 09:00:42,679 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 9f1b994ea463b48490327c6293d524e2/info of 9f1b994ea463b48490327c6293d524e2 into a1172d45334c4ee585bc0022b6757d1c(size=43.8 K), total size for store is 68.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-03 09:00:42,679 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 9f1b994ea463b48490327c6293d524e2: 2023-06-03 09:00:42,679 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2., storeName=9f1b994ea463b48490327c6293d524e2/info, priority=13, startTime=1685782842625; duration=0sec 2023-06-03 09:00:42,680 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=68.6 K, sizeToCheck=16.0 K 2023-06-03 09:00:42,680 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-03 09:00:42,680 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/a1172d45334c4ee585bc0022b6757d1c because midkey is the same as first or last row 2023-06-03 09:00:42,680 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:00:44,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:44,646 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9f1b994ea463b48490327c6293d524e2 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-06-03 09:00:44,656 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=82 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/d4c7957cba264ee887a4af85a349e951 2023-06-03 09:00:44,662 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/d4c7957cba264ee887a4af85a349e951 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/d4c7957cba264ee887a4af85a349e951 2023-06-03 09:00:44,667 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/d4c7957cba264ee887a4af85a349e951, entries=9, sequenceid=82, filesize=14.2 K 2023-06-03 09:00:44,668 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=18.91 KB/19368 for 9f1b994ea463b48490327c6293d524e2 in 22ms, sequenceid=82, compaction requested=true 2023-06-03 09:00:44,668 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9f1b994ea463b48490327c6293d524e2: 2023-06-03 09:00:44,668 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=82.8 K, sizeToCheck=16.0 K 2023-06-03 09:00:44,668 DEBUG [MemStoreFlusher.0] 
regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-03 09:00:44,668 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/a1172d45334c4ee585bc0022b6757d1c because midkey is the same as first or last row 2023-06-03 09:00:44,669 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:00:44,669 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-03 09:00:44,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:44,669 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9f1b994ea463b48490327c6293d524e2 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-06-03 09:00:44,670 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 84764 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-03 09:00:44,670 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1912): 9f1b994ea463b48490327c6293d524e2/info is initiating minor compaction (all files) 2023-06-03 09:00:44,670 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 9f1b994ea463b48490327c6293d524e2/info in TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 
2023-06-03 09:00:44,670 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/a1172d45334c4ee585bc0022b6757d1c, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/c7fd0d2fa22a450b940e6238b046088a, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/d4c7957cba264ee887a4af85a349e951] into tmpdir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp, totalSize=82.8 K 2023-06-03 09:00:44,671 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting a1172d45334c4ee585bc0022b6757d1c, keycount=37, bloomtype=ROW, size=43.8 K, encoding=NONE, compression=NONE, seqNum=47, earliestPutTs=1685782830515 2023-06-03 09:00:44,671 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting c7fd0d2fa22a450b940e6238b046088a, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=69, earliestPutTs=1685782842599 2023-06-03 09:00:44,672 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting d4c7957cba264ee887a4af85a349e951, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1685782842626 2023-06-03 09:00:44,682 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=104 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/710d26fe14c64756aa90c2d967b65adf 2023-06-03 09:00:44,685 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] throttle.PressureAwareThroughputController(145): 9f1b994ea463b48490327c6293d524e2#info#compaction#32 average throughput is 33.35 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-03 09:00:44,689 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/710d26fe14c64756aa90c2d967b65adf as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/710d26fe14c64756aa90c2d967b65adf 2023-06-03 09:00:44,690 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=9f1b994ea463b48490327c6293d524e2, server=jenkins-hbase4.apache.org,37577,1685782819466 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-06-03 09:00:44,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] ipc.CallRunner(144): callId: 105 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:43756 deadline: 1685782854690, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=9f1b994ea463b48490327c6293d524e2, server=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:44,697 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/710d26fe14c64756aa90c2d967b65adf, entries=19, sequenceid=104, filesize=24.7 K 2023-06-03 09:00:44,698 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=10.51 KB/10760 for 9f1b994ea463b48490327c6293d524e2 in 29ms, sequenceid=104, compaction requested=false 2023-06-03 09:00:44,698 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9f1b994ea463b48490327c6293d524e2: 2023-06-03 09:00:44,698 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=107.5 K, sizeToCheck=16.0 K 2023-06-03 09:00:44,698 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-03 09:00:44,699 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/a1172d45334c4ee585bc0022b6757d1c because midkey is the same as first or last row 2023-06-03 09:00:44,702 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/4609d040a6994428aae42f0ebc7f092c as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4609d040a6994428aae42f0ebc7f092c 2023-06-03 09:00:44,707 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 9f1b994ea463b48490327c6293d524e2/info of 9f1b994ea463b48490327c6293d524e2 into 4609d040a6994428aae42f0ebc7f092c(size=73.5 K), total size for store is 98.3 K. 
This selection was in queue for 0sec, and took 0sec to execute. 2023-06-03 09:00:44,707 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 9f1b994ea463b48490327c6293d524e2: 2023-06-03 09:00:44,707 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2., storeName=9f1b994ea463b48490327c6293d524e2/info, priority=13, startTime=1685782844668; duration=0sec 2023-06-03 09:00:44,708 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=98.3 K, sizeToCheck=16.0 K 2023-06-03 09:00:44,708 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-03 09:00:44,708 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:00:44,709 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:00:44,710 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44421] assignment.AssignmentManager(1140): Split request from jenkins-hbase4.apache.org,37577,1685782819466, parent={ENCODED => 9f1b994ea463b48490327c6293d524e2, NAME => 'TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-06-03 09:00:44,716 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44421] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:44,723 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44421] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=9f1b994ea463b48490327c6293d524e2, daughterA=01defe61a22aefced1759181bf235f6b, daughterB=9040292e525670edb46e9f2772289667 2023-06-03 09:00:44,723 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=9f1b994ea463b48490327c6293d524e2, daughterA=01defe61a22aefced1759181bf235f6b, daughterB=9040292e525670edb46e9f2772289667 2023-06-03 09:00:44,724 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=9f1b994ea463b48490327c6293d524e2, daughterA=01defe61a22aefced1759181bf235f6b, daughterB=9040292e525670edb46e9f2772289667 2023-06-03 09:00:44,724 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=9f1b994ea463b48490327c6293d524e2, daughterA=01defe61a22aefced1759181bf235f6b, daughterB=9040292e525670edb46e9f2772289667 2023-06-03 09:00:44,732 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=9f1b994ea463b48490327c6293d524e2, UNASSIGN}] 2023-06-03 09:00:44,733 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=9f1b994ea463b48490327c6293d524e2, UNASSIGN 2023-06-03 09:00:44,734 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=9f1b994ea463b48490327c6293d524e2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:44,734 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685782844734"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782844734"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782844734"}]},"ts":"1685782844734"} 2023-06-03 09:00:44,735 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure 9f1b994ea463b48490327c6293d524e2, server=jenkins-hbase4.apache.org,37577,1685782819466}] 2023-06-03 09:00:44,893 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:44,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9f1b994ea463b48490327c6293d524e2, disabling compactions & flushes 2023-06-03 09:00:44,894 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 2023-06-03 09:00:44,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 2023-06-03 09:00:44,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. after waiting 0 ms 2023-06-03 09:00:44,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 
2023-06-03 09:00:44,894 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 9f1b994ea463b48490327c6293d524e2 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-06-03 09:00:44,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=118 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/de2f9c6df62547f08036fd7812d633f1 2023-06-03 09:00:44,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.tmp/info/de2f9c6df62547f08036fd7812d633f1 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/de2f9c6df62547f08036fd7812d633f1 2023-06-03 09:00:44,915 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/de2f9c6df62547f08036fd7812d633f1, entries=10, sequenceid=118, filesize=15.3 K 2023-06-03 09:00:44,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=0 B/0 for 9f1b994ea463b48490327c6293d524e2 in 22ms, sequenceid=118, compaction requested=true 2023-06-03 09:00:44,922 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/713af2decba04cbc80e58d80b04dfa83, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4ee61ac412074d84891e643e43277cb0, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/a1172d45334c4ee585bc0022b6757d1c, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/c81cff8e927e499d8241b161d0734e13, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/c7fd0d2fa22a450b940e6238b046088a, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/d4c7957cba264ee887a4af85a349e951] to archive 2023-06-03 09:00:44,923 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-03 09:00:44,925 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/713af2decba04cbc80e58d80b04dfa83 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/713af2decba04cbc80e58d80b04dfa83 2023-06-03 09:00:44,926 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4ee61ac412074d84891e643e43277cb0 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4ee61ac412074d84891e643e43277cb0 2023-06-03 09:00:44,927 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/a1172d45334c4ee585bc0022b6757d1c to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/a1172d45334c4ee585bc0022b6757d1c 2023-06-03 09:00:44,929 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/c81cff8e927e499d8241b161d0734e13 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/c81cff8e927e499d8241b161d0734e13 2023-06-03 09:00:44,930 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/c7fd0d2fa22a450b940e6238b046088a to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/c7fd0d2fa22a450b940e6238b046088a 2023-06-03 09:00:44,931 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/d4c7957cba264ee887a4af85a349e951 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/d4c7957cba264ee887a4af85a349e951 2023-06-03 
09:00:44,938 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/recovered.edits/121.seqid, newMaxSeqId=121, maxSeqId=1 2023-06-03 09:00:44,939 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 2023-06-03 09:00:44,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9f1b994ea463b48490327c6293d524e2: 2023-06-03 09:00:44,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:44,942 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=9f1b994ea463b48490327c6293d524e2, regionState=CLOSED 2023-06-03 09:00:44,942 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685782844941"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782844941"}]},"ts":"1685782844941"} 2023-06-03 09:00:44,946 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-06-03 09:00:44,946 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure 9f1b994ea463b48490327c6293d524e2, server=jenkins-hbase4.apache.org,37577,1685782819466 in 208 msec 2023-06-03 09:00:44,948 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-06-03 09:00:44,948 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=9f1b994ea463b48490327c6293d524e2, UNASSIGN in 215 msec 2023-06-03 09:00:44,963 INFO [PEWorker-4] assignment.SplitTableRegionProcedure(694): pid=12 splitting 3 storefiles, region=9f1b994ea463b48490327c6293d524e2, threads=3 2023-06-03 09:00:44,964 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4609d040a6994428aae42f0ebc7f092c for region: 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:44,964 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/710d26fe14c64756aa90c2d967b65adf for region: 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:44,964 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/de2f9c6df62547f08036fd7812d633f1 for region: 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:44,976 DEBUG [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(700): Will create HFileLink file 
for hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/710d26fe14c64756aa90c2d967b65adf, top=true 2023-06-03 09:00:44,976 DEBUG [StoreFileSplitter-pool-2] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/de2f9c6df62547f08036fd7812d633f1, top=true 2023-06-03 09:00:44,982 INFO [StoreFileSplitter-pool-2] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.splits/9040292e525670edb46e9f2772289667/info/TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-de2f9c6df62547f08036fd7812d633f1 for child: 9040292e525670edb46e9f2772289667, parent: 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:44,982 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/de2f9c6df62547f08036fd7812d633f1 for region: 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:44,992 INFO [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/.splits/9040292e525670edb46e9f2772289667/info/TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-710d26fe14c64756aa90c2d967b65adf for child: 9040292e525670edb46e9f2772289667, parent: 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:44,992 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/710d26fe14c64756aa90c2d967b65adf for region: 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:45,011 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4609d040a6994428aae42f0ebc7f092c for region: 9f1b994ea463b48490327c6293d524e2 2023-06-03 09:00:45,011 DEBUG [PEWorker-4] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region 9f1b994ea463b48490327c6293d524e2 Daughter A: 1 storefiles, Daughter B: 3 storefiles. 
2023-06-03 09:00:45,046 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b/recovered.edits/121.seqid, newMaxSeqId=121, maxSeqId=-1 2023-06-03 09:00:45,048 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/recovered.edits/121.seqid, newMaxSeqId=121, maxSeqId=-1 2023-06-03 09:00:45,050 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685782845050"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1685782845050"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1685782845050"}]},"ts":"1685782845050"} 2023-06-03 09:00:45,050 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685782845050"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782845050"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782845050"}]},"ts":"1685782845050"} 2023-06-03 09:00:45,051 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685782845050"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782845050"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782845050"}]},"ts":"1685782845050"} 2023-06-03 09:00:45,094 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37577] regionserver.HRegion(9158): Flush requested on 1588230740 2023-06-03 09:00:45,094 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
2023-06-03 09:00:45,094 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-06-03 09:00:45,103 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=01defe61a22aefced1759181bf235f6b, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=9040292e525670edb46e9f2772289667, ASSIGN}] 2023-06-03 09:00:45,105 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=9040292e525670edb46e9f2772289667, ASSIGN 2023-06-03 09:00:45,105 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=01defe61a22aefced1759181bf235f6b, ASSIGN 2023-06-03 09:00:45,105 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=9040292e525670edb46e9f2772289667, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,37577,1685782819466; forceNewPlan=false, retain=false 2023-06-03 09:00:45,106 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=01defe61a22aefced1759181bf235f6b, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,37577,1685782819466; forceNewPlan=false, retain=false 2023-06-03 09:00:45,111 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/.tmp/info/ec316d84e6614cb09a0414c85c99ec2f 2023-06-03 09:00:45,129 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/.tmp/table/6c2c331c85984309bd0e7b1ebedb8f5a 2023-06-03 09:00:45,134 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/.tmp/info/ec316d84e6614cb09a0414c85c99ec2f as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/info/ec316d84e6614cb09a0414c85c99ec2f 2023-06-03 09:00:45,139 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/info/ec316d84e6614cb09a0414c85c99ec2f, entries=29, sequenceid=17, filesize=8.6 K 2023-06-03 09:00:45,140 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/.tmp/table/6c2c331c85984309bd0e7b1ebedb8f5a as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/table/6c2c331c85984309bd0e7b1ebedb8f5a 2023-06-03 09:00:45,144 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/table/6c2c331c85984309bd0e7b1ebedb8f5a, entries=4, sequenceid=17, filesize=4.8 K 2023-06-03 09:00:45,145 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4934, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 51ms, sequenceid=17, compaction requested=false 2023-06-03 09:00:45,146 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-03 09:00:45,257 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=9040292e525670edb46e9f2772289667, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:45,257 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=01defe61a22aefced1759181bf235f6b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:45,257 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685782845257"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782845257"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782845257"}]},"ts":"1685782845257"} 2023-06-03 09:00:45,257 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685782845257"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782845257"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782845257"}]},"ts":"1685782845257"} 2023-06-03 09:00:45,259 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE; OpenRegionProcedure 9040292e525670edb46e9f2772289667, server=jenkins-hbase4.apache.org,37577,1685782819466}] 2023-06-03 09:00:45,260 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=15, state=RUNNABLE; OpenRegionProcedure 01defe61a22aefced1759181bf235f6b, server=jenkins-hbase4.apache.org,37577,1685782819466}] 2023-06-03 09:00:45,414 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b. 
2023-06-03 09:00:45,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 01defe61a22aefced1759181bf235f6b, NAME => 'TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b.', STARTKEY => '', ENDKEY => 'row0062'} 2023-06-03 09:00:45,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 01defe61a22aefced1759181bf235f6b 2023-06-03 09:00:45,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:00:45,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 01defe61a22aefced1759181bf235f6b 2023-06-03 09:00:45,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 01defe61a22aefced1759181bf235f6b 2023-06-03 09:00:45,416 INFO [StoreOpener-01defe61a22aefced1759181bf235f6b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 01defe61a22aefced1759181bf235f6b 2023-06-03 09:00:45,416 DEBUG [StoreOpener-01defe61a22aefced1759181bf235f6b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b/info 2023-06-03 09:00:45,416 DEBUG [StoreOpener-01defe61a22aefced1759181bf235f6b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b/info 2023-06-03 09:00:45,417 INFO [StoreOpener-01defe61a22aefced1759181bf235f6b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 01defe61a22aefced1759181bf235f6b columnFamilyName info 2023-06-03 09:00:45,428 DEBUG [StoreOpener-01defe61a22aefced1759181bf235f6b-1] regionserver.HStore(539): loaded hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b/info/4609d040a6994428aae42f0ebc7f092c.9f1b994ea463b48490327c6293d524e2->hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4609d040a6994428aae42f0ebc7f092c-bottom 2023-06-03 09:00:45,429 INFO 
[StoreOpener-01defe61a22aefced1759181bf235f6b-1] regionserver.HStore(310): Store=01defe61a22aefced1759181bf235f6b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:00:45,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b 2023-06-03 09:00:45,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b 2023-06-03 09:00:45,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 01defe61a22aefced1759181bf235f6b 2023-06-03 09:00:45,434 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 01defe61a22aefced1759181bf235f6b; next sequenceid=122; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=718026, jitterRate=-0.08698396384716034}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 09:00:45,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 01defe61a22aefced1759181bf235f6b: 2023-06-03 09:00:45,435 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b., pid=18, masterSystemTime=1685782845411 2023-06-03 09:00:45,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:00:45,436 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-06-03 09:00:45,436 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b. 2023-06-03 09:00:45,436 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1912): 01defe61a22aefced1759181bf235f6b/info is initiating minor compaction (all files) 2023-06-03 09:00:45,436 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 01defe61a22aefced1759181bf235f6b/info in TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b. 
2023-06-03 09:00:45,437 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b/info/4609d040a6994428aae42f0ebc7f092c.9f1b994ea463b48490327c6293d524e2->hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4609d040a6994428aae42f0ebc7f092c-bottom] into tmpdir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b/.tmp, totalSize=73.5 K 2023-06-03 09:00:45,437 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 4609d040a6994428aae42f0ebc7f092c.9f1b994ea463b48490327c6293d524e2, keycount=32, bloomtype=ROW, size=73.5 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1685782830515 2023-06-03 09:00:45,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b. 2023-06-03 09:00:45,437 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b. 2023-06-03 09:00:45,437 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 2023-06-03 09:00:45,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9040292e525670edb46e9f2772289667, NAME => 'TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.', STARTKEY => 'row0062', ENDKEY => ''} 2023-06-03 09:00:45,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 9040292e525670edb46e9f2772289667 2023-06-03 09:00:45,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:00:45,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9040292e525670edb46e9f2772289667 2023-06-03 09:00:45,438 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=01defe61a22aefced1759181bf235f6b, regionState=OPEN, openSeqNum=122, regionLocation=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:45,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9040292e525670edb46e9f2772289667 2023-06-03 09:00:45,438 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685782845438"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685782845438"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685782845438"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782845438"}]},"ts":"1685782845438"} 2023-06-03 09:00:45,439 INFO [StoreOpener-9040292e525670edb46e9f2772289667-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 9040292e525670edb46e9f2772289667 2023-06-03 09:00:45,440 DEBUG [StoreOpener-9040292e525670edb46e9f2772289667-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info 2023-06-03 09:00:45,440 DEBUG [StoreOpener-9040292e525670edb46e9f2772289667-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info 2023-06-03 09:00:45,441 INFO [StoreOpener-9040292e525670edb46e9f2772289667-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9040292e525670edb46e9f2772289667 columnFamilyName info 2023-06-03 09:00:45,442 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-06-03 09:00:45,442 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; OpenRegionProcedure 01defe61a22aefced1759181bf235f6b, server=jenkins-hbase4.apache.org,37577,1685782819466 in 181 msec 2023-06-03 09:00:45,443 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=01defe61a22aefced1759181bf235f6b, ASSIGN in 339 msec 2023-06-03 09:00:45,444 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] throttle.PressureAwareThroughputController(145): 01defe61a22aefced1759181bf235f6b#info#compaction#36 average throughput is 31.30 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-03 09:00:45,449 DEBUG [StoreOpener-9040292e525670edb46e9f2772289667-1] regionserver.HStore(539): loaded hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/4609d040a6994428aae42f0ebc7f092c.9f1b994ea463b48490327c6293d524e2->hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4609d040a6994428aae42f0ebc7f092c-top 2023-06-03 09:00:45,455 DEBUG [StoreOpener-9040292e525670edb46e9f2772289667-1] regionserver.HStore(539): loaded hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-710d26fe14c64756aa90c2d967b65adf 2023-06-03 09:00:45,460 DEBUG [StoreOpener-9040292e525670edb46e9f2772289667-1] regionserver.HStore(539): loaded hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-de2f9c6df62547f08036fd7812d633f1 2023-06-03 09:00:45,460 INFO [StoreOpener-9040292e525670edb46e9f2772289667-1] regionserver.HStore(310): Store=9040292e525670edb46e9f2772289667/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:00:45,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667 2023-06-03 09:00:45,462 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b/.tmp/info/23e7ec5e7fd34835b8d34b1efa517286 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b/info/23e7ec5e7fd34835b8d34b1efa517286 2023-06-03 09:00:45,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667 2023-06-03 09:00:45,465 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9040292e525670edb46e9f2772289667 2023-06-03 09:00:45,466 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9040292e525670edb46e9f2772289667; next sequenceid=122; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=806139, jitterRate=0.025058984756469727}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 09:00:45,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:00:45,467 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667., pid=17, masterSystemTime=1685782845411 2023-06-03 09:00:45,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:00:45,469 DEBUG [RS:0;jenkins-hbase4:37577-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-03 09:00:45,471 INFO [RS:0;jenkins-hbase4:37577-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 2023-06-03 09:00:45,471 DEBUG [RS:0;jenkins-hbase4:37577-longCompactions-0] regionserver.HStore(1912): 9040292e525670edb46e9f2772289667/info is initiating minor compaction (all files) 2023-06-03 09:00:45,471 INFO [RS:0;jenkins-hbase4:37577-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 9040292e525670edb46e9f2772289667/info in TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 2023-06-03 09:00:45,471 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in 01defe61a22aefced1759181bf235f6b/info of 01defe61a22aefced1759181bf235f6b into 23e7ec5e7fd34835b8d34b1efa517286(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-03 09:00:45,471 INFO [RS:0;jenkins-hbase4:37577-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/4609d040a6994428aae42f0ebc7f092c.9f1b994ea463b48490327c6293d524e2->hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4609d040a6994428aae42f0ebc7f092c-top, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-710d26fe14c64756aa90c2d967b65adf, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-de2f9c6df62547f08036fd7812d633f1] into tmpdir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp, totalSize=113.5 K 2023-06-03 09:00:45,471 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 01defe61a22aefced1759181bf235f6b: 2023-06-03 09:00:45,471 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b., storeName=01defe61a22aefced1759181bf235f6b/info, priority=15, startTime=1685782845435; duration=0sec 2023-06-03 09:00:45,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 2023-06-03 09:00:45,471 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:00:45,471 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 
2023-06-03 09:00:45,472 DEBUG [RS:0;jenkins-hbase4:37577-longCompactions-0] compactions.Compactor(207): Compacting 4609d040a6994428aae42f0ebc7f092c.9f1b994ea463b48490327c6293d524e2, keycount=32, bloomtype=ROW, size=73.5 K, encoding=NONE, compression=NONE, seqNum=83, earliestPutTs=1685782830515 2023-06-03 09:00:45,472 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=9040292e525670edb46e9f2772289667, regionState=OPEN, openSeqNum=122, regionLocation=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:00:45,472 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685782845472"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685782845472"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685782845472"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782845472"}]},"ts":"1685782845472"} 2023-06-03 09:00:45,472 DEBUG [RS:0;jenkins-hbase4:37577-longCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-710d26fe14c64756aa90c2d967b65adf, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=104, earliestPutTs=1685782844647 2023-06-03 09:00:45,473 DEBUG [RS:0;jenkins-hbase4:37577-longCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-de2f9c6df62547f08036fd7812d633f1, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=118, earliestPutTs=1685782844669 2023-06-03 09:00:45,476 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-06-03 09:00:45,476 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; OpenRegionProcedure 9040292e525670edb46e9f2772289667, server=jenkins-hbase4.apache.org,37577,1685782819466 in 215 msec 2023-06-03 09:00:45,478 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-06-03 09:00:45,478 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=9040292e525670edb46e9f2772289667, ASSIGN in 373 msec 2023-06-03 09:00:45,479 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=9f1b994ea463b48490327c6293d524e2, daughterA=01defe61a22aefced1759181bf235f6b, daughterB=9040292e525670edb46e9f2772289667 in 761 msec 2023-06-03 09:00:45,482 INFO [RS:0;jenkins-hbase4:37577-longCompactions-0] throttle.PressureAwareThroughputController(145): 9040292e525670edb46e9f2772289667#info#compaction#37 average throughput is 33.86 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-03 09:00:45,494 DEBUG [RS:0;jenkins-hbase4:37577-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/6a4a678f79ce4805a095fe63b4e58b3d as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/6a4a678f79ce4805a095fe63b4e58b3d 2023-06-03 09:00:45,500 INFO [RS:0;jenkins-hbase4:37577-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 9040292e525670edb46e9f2772289667/info of 9040292e525670edb46e9f2772289667 into 6a4a678f79ce4805a095fe63b4e58b3d(size=39.8 K), total size for store is 39.8 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-03 09:00:45,500 DEBUG [RS:0;jenkins-hbase4:37577-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:00:45,500 INFO [RS:0;jenkins-hbase4:37577-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667., storeName=9040292e525670edb46e9f2772289667/info, priority=13, startTime=1685782845467; duration=0sec 2023-06-03 09:00:45,500 DEBUG [RS:0;jenkins-hbase4:37577-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:00:50,502 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-03 09:00:54,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] ipc.CallRunner(144): callId: 107 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:43756 deadline: 1685782864739, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1685782820502.9f1b994ea463b48490327c6293d524e2. 
is not online on jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:01:05,771 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=3, created chunk count=13, reused chunk count=29, reuseRatio=69.05% 2023-06-03 09:01:05,771 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-06-03 09:01:12,755 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-03 09:01:16,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9040292e525670edb46e9f2772289667 2023-06-03 09:01:16,775 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9040292e525670edb46e9f2772289667 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-03 09:01:16,800 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=132 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/ca92a972cf2e40eeb76486f960d1b51a 2023-06-03 09:01:16,807 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/ca92a972cf2e40eeb76486f960d1b51a as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/ca92a972cf2e40eeb76486f960d1b51a 2023-06-03 09:01:16,812 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/ca92a972cf2e40eeb76486f960d1b51a, entries=7, sequenceid=132, filesize=12.1 K 2023-06-03 09:01:16,813 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=22.07 KB/22596 for 9040292e525670edb46e9f2772289667 in 38ms, sequenceid=132, compaction requested=false 2023-06-03 09:01:16,813 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:16,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9040292e525670edb46e9f2772289667 2023-06-03 09:01:16,814 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9040292e525670edb46e9f2772289667 1/1 column families, dataSize=23.12 KB heapSize=25 KB 2023-06-03 09:01:16,836 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=23.12 KB at sequenceid=157 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/895bcf2d2f8642428824479014264939 2023-06-03 09:01:16,844 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/895bcf2d2f8642428824479014264939 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/895bcf2d2f8642428824479014264939 2023-06-03 09:01:16,848 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/895bcf2d2f8642428824479014264939, entries=22, sequenceid=157, filesize=27.9 K 2023-06-03 09:01:16,849 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.12 KB/23672, heapSize ~24.98 KB/25584, currentSize=5.25 KB/5380 for 9040292e525670edb46e9f2772289667 in 35ms, sequenceid=157, compaction requested=true 2023-06-03 09:01:16,849 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:16,849 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:01:16,849 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-03 09:01:16,850 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 81719 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-03 09:01:16,851 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1912): 9040292e525670edb46e9f2772289667/info is initiating minor compaction (all files) 2023-06-03 09:01:16,851 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 9040292e525670edb46e9f2772289667/info in TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 
2023-06-03 09:01:16,851 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/6a4a678f79ce4805a095fe63b4e58b3d, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/ca92a972cf2e40eeb76486f960d1b51a, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/895bcf2d2f8642428824479014264939] into tmpdir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp, totalSize=79.8 K 2023-06-03 09:01:16,851 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 6a4a678f79ce4805a095fe63b4e58b3d, keycount=33, bloomtype=ROW, size=39.8 K, encoding=NONE, compression=NONE, seqNum=118, earliestPutTs=1685782842638 2023-06-03 09:01:16,851 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting ca92a972cf2e40eeb76486f960d1b51a, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=132, earliestPutTs=1685782874767 2023-06-03 09:01:16,852 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 895bcf2d2f8642428824479014264939, keycount=22, bloomtype=ROW, size=27.9 K, encoding=NONE, compression=NONE, seqNum=157, earliestPutTs=1685782876776 2023-06-03 09:01:16,863 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] throttle.PressureAwareThroughputController(145): 9040292e525670edb46e9f2772289667#info#compaction#40 average throughput is 31.81 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-03 09:01:16,882 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/53c410c8df11409985c96a6ab39ba1b1 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/53c410c8df11409985c96a6ab39ba1b1 2023-06-03 09:01:16,888 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 9040292e525670edb46e9f2772289667/info of 9040292e525670edb46e9f2772289667 into 53c410c8df11409985c96a6ab39ba1b1(size=70.5 K), total size for store is 70.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-03 09:01:16,888 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:16,888 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667., storeName=9040292e525670edb46e9f2772289667/info, priority=13, startTime=1685782876849; duration=0sec 2023-06-03 09:01:16,888 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:01:18,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9040292e525670edb46e9f2772289667 2023-06-03 09:01:18,826 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9040292e525670edb46e9f2772289667 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-03 09:01:18,835 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=168 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/a7d45041ffd5402282a72d63a0cfb64b 2023-06-03 09:01:18,842 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/a7d45041ffd5402282a72d63a0cfb64b as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/a7d45041ffd5402282a72d63a0cfb64b 2023-06-03 09:01:18,847 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/a7d45041ffd5402282a72d63a0cfb64b, entries=7, sequenceid=168, filesize=12.1 K 2023-06-03 09:01:18,848 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 9040292e525670edb46e9f2772289667 in 22ms, sequenceid=168, compaction requested=false 2023-06-03 09:01:18,848 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:18,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9040292e525670edb46e9f2772289667 2023-06-03 09:01:18,848 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9040292e525670edb46e9f2772289667 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-06-03 09:01:18,859 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=191 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/20eba4e95ff747f18db4e9da5e355837 2023-06-03 09:01:18,860 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=9040292e525670edb46e9f2772289667, server=jenkins-hbase4.apache.org,37577,1685782819466
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-03 09:01:18,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] ipc.CallRunner(144): callId: 176 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:43756 deadline: 1685782888860, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=9040292e525670edb46e9f2772289667, server=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:01:18,864 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/20eba4e95ff747f18db4e9da5e355837 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/20eba4e95ff747f18db4e9da5e355837 2023-06-03 09:01:18,870 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/20eba4e95ff747f18db4e9da5e355837, entries=20, sequenceid=191, filesize=25.8 K 2023-06-03 09:01:18,871 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=9.46 KB/9684 for 9040292e525670edb46e9f2772289667 in 23ms, sequenceid=191, compaction requested=true 2023-06-03 09:01:18,871 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:18,871 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:01:18,871 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-03 09:01:18,872 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 111068 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-03 09:01:18,872 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1912): 9040292e525670edb46e9f2772289667/info is initiating minor compaction (all files) 2023-06-03 09:01:18,872 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 
9040292e525670edb46e9f2772289667/info in TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 2023-06-03 09:01:18,872 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/53c410c8df11409985c96a6ab39ba1b1, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/a7d45041ffd5402282a72d63a0cfb64b, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/20eba4e95ff747f18db4e9da5e355837] into tmpdir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp, totalSize=108.5 K 2023-06-03 09:01:18,873 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 53c410c8df11409985c96a6ab39ba1b1, keycount=62, bloomtype=ROW, size=70.5 K, encoding=NONE, compression=NONE, seqNum=157, earliestPutTs=1685782842638 2023-06-03 09:01:18,873 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting a7d45041ffd5402282a72d63a0cfb64b, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=168, earliestPutTs=1685782876815 2023-06-03 09:01:18,873 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 20eba4e95ff747f18db4e9da5e355837, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=191, earliestPutTs=1685782878826 2023-06-03 09:01:18,883 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] throttle.PressureAwareThroughputController(145): 9040292e525670edb46e9f2772289667#info#compaction#43 average throughput is 45.66 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-03 09:01:18,898 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/735437bbe395400789e3ecea55f7e166 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/735437bbe395400789e3ecea55f7e166 2023-06-03 09:01:18,903 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 9040292e525670edb46e9f2772289667/info of 9040292e525670edb46e9f2772289667 into 735437bbe395400789e3ecea55f7e166(size=99.1 K), total size for store is 99.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-03 09:01:18,903 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:18,903 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667., storeName=9040292e525670edb46e9f2772289667/info, priority=13, startTime=1685782878871; duration=0sec 2023-06-03 09:01:18,903 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:01:28,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9040292e525670edb46e9f2772289667 2023-06-03 09:01:28,883 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9040292e525670edb46e9f2772289667 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-06-03 09:01:28,891 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=205 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/6533df83ae584af8b9090b3fde050b9b 2023-06-03 09:01:28,897 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/6533df83ae584af8b9090b3fde050b9b as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/6533df83ae584af8b9090b3fde050b9b 2023-06-03 09:01:28,902 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/6533df83ae584af8b9090b3fde050b9b, entries=10, sequenceid=205, filesize=15.3 K 2023-06-03 09:01:28,903 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=0 B/0 for 9040292e525670edb46e9f2772289667 in 20ms, sequenceid=205, compaction requested=false 2023-06-03 09:01:28,903 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:30,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9040292e525670edb46e9f2772289667 2023-06-03 09:01:30,891 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9040292e525670edb46e9f2772289667 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-03 09:01:30,901 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=215 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/3ca1c3cb24984815a4837a2250b0f838 2023-06-03 09:01:30,907 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/3ca1c3cb24984815a4837a2250b0f838 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/3ca1c3cb24984815a4837a2250b0f838 2023-06-03 09:01:30,912 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/3ca1c3cb24984815a4837a2250b0f838, entries=7, sequenceid=215, filesize=12.1 K 2023-06-03 09:01:30,913 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 9040292e525670edb46e9f2772289667 in 22ms, sequenceid=215, compaction requested=true 2023-06-03 09:01:30,913 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:30,914 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:01:30,914 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-03 09:01:30,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9040292e525670edb46e9f2772289667 2023-06-03 09:01:30,914 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9040292e525670edb46e9f2772289667 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-06-03 09:01:30,915 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 129480 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-03 09:01:30,915 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1912): 9040292e525670edb46e9f2772289667/info is initiating minor compaction (all files) 2023-06-03 09:01:30,915 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 9040292e525670edb46e9f2772289667/info in TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 
2023-06-03 09:01:30,915 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/735437bbe395400789e3ecea55f7e166, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/6533df83ae584af8b9090b3fde050b9b, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/3ca1c3cb24984815a4837a2250b0f838] into tmpdir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp, totalSize=126.4 K 2023-06-03 09:01:30,916 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 735437bbe395400789e3ecea55f7e166, keycount=89, bloomtype=ROW, size=99.1 K, encoding=NONE, compression=NONE, seqNum=191, earliestPutTs=1685782842638 2023-06-03 09:01:30,916 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 6533df83ae584af8b9090b3fde050b9b, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=205, earliestPutTs=1685782878849 2023-06-03 09:01:30,917 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 3ca1c3cb24984815a4837a2250b0f838, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=215, earliestPutTs=1685782890884 2023-06-03 09:01:30,933 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] throttle.PressureAwareThroughputController(145): 9040292e525670edb46e9f2772289667#info#compaction#47 average throughput is 108.77 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-03 09:01:30,934 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=238 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/cfc2c950f813463f899b70751aaacc3d 2023-06-03 09:01:30,941 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/cfc2c950f813463f899b70751aaacc3d as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/cfc2c950f813463f899b70751aaacc3d 2023-06-03 09:01:30,946 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/cfc2c950f813463f899b70751aaacc3d, entries=20, sequenceid=238, filesize=25.8 K 2023-06-03 09:01:30,947 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=5.25 KB/5380 for 9040292e525670edb46e9f2772289667 in 33ms, sequenceid=238, compaction requested=false 2023-06-03 09:01:30,947 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:30,947 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/92ed5560e78942abb6c67fa4b1a538a9 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/92ed5560e78942abb6c67fa4b1a538a9 2023-06-03 09:01:30,953 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 9040292e525670edb46e9f2772289667/info of 9040292e525670edb46e9f2772289667 into 92ed5560e78942abb6c67fa4b1a538a9(size=117.1 K), total size for store is 142.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-03 09:01:30,953 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:30,953 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667., storeName=9040292e525670edb46e9f2772289667/info, priority=13, startTime=1685782890914; duration=0sec 2023-06-03 09:01:30,953 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:01:32,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9040292e525670edb46e9f2772289667 2023-06-03 09:01:32,924 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9040292e525670edb46e9f2772289667 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-03 09:01:32,932 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=249 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/be67605993c244e886dbec56aaa0141f 2023-06-03 09:01:32,938 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/be67605993c244e886dbec56aaa0141f as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/be67605993c244e886dbec56aaa0141f 2023-06-03 09:01:32,943 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/be67605993c244e886dbec56aaa0141f, entries=7, sequenceid=249, filesize=12.1 K 2023-06-03 09:01:32,944 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 9040292e525670edb46e9f2772289667 in 20ms, sequenceid=249, compaction requested=true 2023-06-03 09:01:32,944 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:32,944 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:01:32,944 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-03 09:01:32,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9040292e525670edb46e9f2772289667 2023-06-03 09:01:32,945 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9040292e525670edb46e9f2772289667 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-06-03 09:01:32,945 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has 
selected 3 files of size 158712 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-03 09:01:32,946 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1912): 9040292e525670edb46e9f2772289667/info is initiating minor compaction (all files) 2023-06-03 09:01:32,946 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 9040292e525670edb46e9f2772289667/info in TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 2023-06-03 09:01:32,946 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/92ed5560e78942abb6c67fa4b1a538a9, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/cfc2c950f813463f899b70751aaacc3d, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/be67605993c244e886dbec56aaa0141f] into tmpdir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp, totalSize=155.0 K 2023-06-03 09:01:32,946 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 92ed5560e78942abb6c67fa4b1a538a9, keycount=106, bloomtype=ROW, size=117.1 K, encoding=NONE, compression=NONE, seqNum=215, earliestPutTs=1685782842638 2023-06-03 09:01:32,947 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting cfc2c950f813463f899b70751aaacc3d, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=238, earliestPutTs=1685782890892 2023-06-03 09:01:32,947 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting be67605993c244e886dbec56aaa0141f, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=249, earliestPutTs=1685782890915 2023-06-03 09:01:32,956 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=9040292e525670edb46e9f2772289667, server=jenkins-hbase4.apache.org,37577,1685782819466
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-03 09:01:32,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] ipc.CallRunner(144): callId: 242 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:43756 deadline: 1685782902956, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=9040292e525670edb46e9f2772289667, server=jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:01:32,962 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=272 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/352fbf8882f04b97a9c4706df0d27467 2023-06-03 09:01:32,965 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] throttle.PressureAwareThroughputController(145): 9040292e525670edb46e9f2772289667#info#compaction#50 average throughput is 68.24 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-03 09:01:32,967 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/352fbf8882f04b97a9c4706df0d27467 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/352fbf8882f04b97a9c4706df0d27467 2023-06-03 09:01:32,976 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/352fbf8882f04b97a9c4706df0d27467, entries=20, sequenceid=272, filesize=25.8 K 2023-06-03 09:01:32,977 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=9.46 KB/9684 for 9040292e525670edb46e9f2772289667 in 32ms, sequenceid=272, compaction requested=false 2023-06-03 09:01:32,977 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:32,978 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/35baa9d6b9f1419abfaad9efaeaf0af5 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/35baa9d6b9f1419abfaad9efaeaf0af5 2023-06-03 09:01:32,983 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 9040292e525670edb46e9f2772289667/info of 9040292e525670edb46e9f2772289667 into 35baa9d6b9f1419abfaad9efaeaf0af5(size=145.8 K), total size for store is 171.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-03 09:01:32,983 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:32,983 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667., storeName=9040292e525670edb46e9f2772289667/info, priority=13, startTime=1685782892944; duration=0sec 2023-06-03 09:01:32,984 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:01:42,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9040292e525670edb46e9f2772289667 2023-06-03 09:01:42,992 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9040292e525670edb46e9f2772289667 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-06-03 09:01:43,004 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=286 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/9c1398bdd58248a98b84d3a9b0746f3a 2023-06-03 09:01:43,010 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/9c1398bdd58248a98b84d3a9b0746f3a as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/9c1398bdd58248a98b84d3a9b0746f3a 2023-06-03 09:01:43,015 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/9c1398bdd58248a98b84d3a9b0746f3a, entries=10, sequenceid=286, filesize=15.3 K 2023-06-03 09:01:43,016 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=0 B/0 for 9040292e525670edb46e9f2772289667 in 24ms, sequenceid=286, compaction requested=true 2023-06-03 09:01:43,016 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:43,016 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:01:43,016 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-03 09:01:43,017 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 191384 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-03 09:01:43,017 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1912): 9040292e525670edb46e9f2772289667/info is initiating minor compaction (all files) 2023-06-03 09:01:43,017 INFO 
[RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 9040292e525670edb46e9f2772289667/info in TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 2023-06-03 09:01:43,018 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/35baa9d6b9f1419abfaad9efaeaf0af5, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/352fbf8882f04b97a9c4706df0d27467, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/9c1398bdd58248a98b84d3a9b0746f3a] into tmpdir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp, totalSize=186.9 K 2023-06-03 09:01:43,018 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 35baa9d6b9f1419abfaad9efaeaf0af5, keycount=133, bloomtype=ROW, size=145.8 K, encoding=NONE, compression=NONE, seqNum=249, earliestPutTs=1685782842638 2023-06-03 09:01:43,018 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 352fbf8882f04b97a9c4706df0d27467, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=272, earliestPutTs=1685782892924 2023-06-03 09:01:43,019 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting 9c1398bdd58248a98b84d3a9b0746f3a, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=286, earliestPutTs=1685782892946 2023-06-03 09:01:43,030 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] throttle.PressureAwareThroughputController(145): 9040292e525670edb46e9f2772289667#info#compaction#52 average throughput is 83.63 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-03 09:01:43,047 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/bf75e42e43c14421a0fee0163cc9da9c as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/bf75e42e43c14421a0fee0163cc9da9c 2023-06-03 09:01:43,052 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 9040292e525670edb46e9f2772289667/info of 9040292e525670edb46e9f2772289667 into bf75e42e43c14421a0fee0163cc9da9c(size=177.5 K), total size for store is 177.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-03 09:01:43,052 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:43,052 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667., storeName=9040292e525670edb46e9f2772289667/info, priority=13, startTime=1685782903016; duration=0sec 2023-06-03 09:01:43,053 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:01:45,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9040292e525670edb46e9f2772289667 2023-06-03 09:01:45,000 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9040292e525670edb46e9f2772289667 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-03 09:01:45,010 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=297 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/d9a1381adbb945baaf2b27144b9eac20 2023-06-03 09:01:45,016 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/d9a1381adbb945baaf2b27144b9eac20 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/d9a1381adbb945baaf2b27144b9eac20 2023-06-03 09:01:45,023 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/d9a1381adbb945baaf2b27144b9eac20, entries=7, sequenceid=297, filesize=12.1 K 2023-06-03 09:01:45,024 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 9040292e525670edb46e9f2772289667 in 24ms, sequenceid=297, compaction requested=false 2023-06-03 09:01:45,024 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:45,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37577] regionserver.HRegion(9158): Flush requested on 9040292e525670edb46e9f2772289667 2023-06-03 09:01:45,025 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 9040292e525670edb46e9f2772289667 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-06-03 09:01:45,037 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=320 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/ad34e0a61e1e436b8d3e08be98d03002 2023-06-03 09:01:45,042 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/ad34e0a61e1e436b8d3e08be98d03002 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/ad34e0a61e1e436b8d3e08be98d03002 2023-06-03 09:01:45,047 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/ad34e0a61e1e436b8d3e08be98d03002, entries=20, sequenceid=320, filesize=25.8 K 2023-06-03 09:01:45,048 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=5.25 KB/5380 for 9040292e525670edb46e9f2772289667 in 23ms, sequenceid=320, compaction requested=true 2023-06-03 09:01:45,048 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:45,048 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:01:45,048 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-03 09:01:45,049 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 220627 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-03 09:01:45,050 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1912): 9040292e525670edb46e9f2772289667/info is initiating minor compaction (all files) 2023-06-03 09:01:45,050 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 9040292e525670edb46e9f2772289667/info in TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 
2023-06-03 09:01:45,050 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/bf75e42e43c14421a0fee0163cc9da9c, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/d9a1381adbb945baaf2b27144b9eac20, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/ad34e0a61e1e436b8d3e08be98d03002] into tmpdir=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp, totalSize=215.5 K 2023-06-03 09:01:45,050 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting bf75e42e43c14421a0fee0163cc9da9c, keycount=163, bloomtype=ROW, size=177.5 K, encoding=NONE, compression=NONE, seqNum=286, earliestPutTs=1685782842638 2023-06-03 09:01:45,050 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting d9a1381adbb945baaf2b27144b9eac20, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=297, earliestPutTs=1685782904993 2023-06-03 09:01:45,051 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] compactions.Compactor(207): Compacting ad34e0a61e1e436b8d3e08be98d03002, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=320, earliestPutTs=1685782905001 2023-06-03 09:01:45,063 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] throttle.PressureAwareThroughputController(145): 9040292e525670edb46e9f2772289667#info#compaction#55 average throughput is 64.99 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-03 09:01:45,083 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/a998efcd90544818ae64e44612427cb0 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/a998efcd90544818ae64e44612427cb0 2023-06-03 09:01:45,088 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 9040292e525670edb46e9f2772289667/info of 9040292e525670edb46e9f2772289667 into a998efcd90544818ae64e44612427cb0(size=206.1 K), total size for store is 206.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-03 09:01:45,088 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:45,088 INFO [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667., storeName=9040292e525670edb46e9f2772289667/info, priority=13, startTime=1685782905048; duration=0sec 2023-06-03 09:01:45,088 DEBUG [RS:0;jenkins-hbase4:37577-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-03 09:01:47,031 INFO [Listener at localhost/35195] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-06-03 09:01:47,047 INFO [Listener at localhost/35195] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466/jenkins-hbase4.apache.org%2C37577%2C1685782819466.1685782819842 with entries=311, filesize=307.65 KB; new WAL /user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466/jenkins-hbase4.apache.org%2C37577%2C1685782819466.1685782907031 2023-06-03 09:01:47,047 DEBUG [Listener at localhost/35195] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-43dc8c8c-73a0-498c-8a65-f8cf06d2b7a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43135,DS-535b7414-eaf9-4cbc-8aa2-ee03fb87faa7,DISK]] 2023-06-03 09:01:47,047 DEBUG [Listener at localhost/35195] wal.AbstractFSWAL(716): hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466/jenkins-hbase4.apache.org%2C37577%2C1685782819466.1685782819842 is not closed yet, will try archiving it next time 2023-06-03 09:01:47,052 DEBUG [Listener at localhost/35195] regionserver.HRegion(2446): Flush status journal for 01defe61a22aefced1759181bf235f6b: 2023-06-03 09:01:47,052 INFO [Listener at localhost/35195] regionserver.HRegion(2745): Flushing f0596de6e6fa870e80124c219e871a19 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-03 09:01:47,064 INFO [Listener at localhost/35195] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/namespace/f0596de6e6fa870e80124c219e871a19/.tmp/info/88aa3bf1c15045239086efadb360d044 2023-06-03 09:01:47,069 DEBUG [Listener at localhost/35195] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/namespace/f0596de6e6fa870e80124c219e871a19/.tmp/info/88aa3bf1c15045239086efadb360d044 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/namespace/f0596de6e6fa870e80124c219e871a19/info/88aa3bf1c15045239086efadb360d044 2023-06-03 09:01:47,073 INFO [Listener at localhost/35195] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/namespace/f0596de6e6fa870e80124c219e871a19/info/88aa3bf1c15045239086efadb360d044, entries=2, sequenceid=6, filesize=4.8 K 2023-06-03 09:01:47,074 INFO [Listener at localhost/35195] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 
B/472, currentSize=0 B/0 for f0596de6e6fa870e80124c219e871a19 in 22ms, sequenceid=6, compaction requested=false 2023-06-03 09:01:47,074 DEBUG [Listener at localhost/35195] regionserver.HRegion(2446): Flush status journal for f0596de6e6fa870e80124c219e871a19: 2023-06-03 09:01:47,075 INFO [Listener at localhost/35195] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-06-03 09:01:47,085 INFO [Listener at localhost/35195] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/.tmp/info/43121d526d744771ab09b57c76c7951f 2023-06-03 09:01:47,090 DEBUG [Listener at localhost/35195] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/.tmp/info/43121d526d744771ab09b57c76c7951f as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/info/43121d526d744771ab09b57c76c7951f 2023-06-03 09:01:47,094 INFO [Listener at localhost/35195] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/info/43121d526d744771ab09b57c76c7951f, entries=16, sequenceid=24, filesize=7.0 K 2023-06-03 09:01:47,095 INFO [Listener at localhost/35195] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2312, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 20ms, sequenceid=24, compaction requested=false 2023-06-03 09:01:47,095 DEBUG [Listener at localhost/35195] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-03 09:01:47,095 INFO [Listener at localhost/35195] regionserver.HRegion(2745): Flushing 9040292e525670edb46e9f2772289667 1/1 column families, dataSize=5.25 KB heapSize=5.88 KB 2023-06-03 09:01:47,113 INFO [Listener at localhost/35195] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.25 KB at sequenceid=329 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/8a6471c006ac48469cfdf3820a57f989 2023-06-03 09:01:47,120 DEBUG [Listener at localhost/35195] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/.tmp/info/8a6471c006ac48469cfdf3820a57f989 as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/8a6471c006ac48469cfdf3820a57f989 2023-06-03 09:01:47,125 INFO [Listener at localhost/35195] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/8a6471c006ac48469cfdf3820a57f989, entries=5, sequenceid=329, filesize=10.0 K 2023-06-03 09:01:47,126 INFO [Listener at localhost/35195] regionserver.HRegion(2948): Finished flush of dataSize ~5.25 KB/5380, heapSize ~5.86 KB/6000, currentSize=0 B/0 for 9040292e525670edb46e9f2772289667 in 31ms, sequenceid=329, compaction requested=false 2023-06-03 09:01:47,126 DEBUG [Listener at localhost/35195] regionserver.HRegion(2446): 
Flush status journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:47,138 INFO [Listener at localhost/35195] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466/jenkins-hbase4.apache.org%2C37577%2C1685782819466.1685782907031 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466/jenkins-hbase4.apache.org%2C37577%2C1685782819466.1685782907126 2023-06-03 09:01:47,139 DEBUG [Listener at localhost/35195] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-43dc8c8c-73a0-498c-8a65-f8cf06d2b7a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43135,DS-535b7414-eaf9-4cbc-8aa2-ee03fb87faa7,DISK]] 2023-06-03 09:01:47,139 DEBUG [Listener at localhost/35195] wal.AbstractFSWAL(716): hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466/jenkins-hbase4.apache.org%2C37577%2C1685782819466.1685782907031 is not closed yet, will try archiving it next time 2023-06-03 09:01:47,139 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466/jenkins-hbase4.apache.org%2C37577%2C1685782819466.1685782819842 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/oldWALs/jenkins-hbase4.apache.org%2C37577%2C1685782819466.1685782819842 2023-06-03 09:01:47,140 INFO [Listener at localhost/35195] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-06-03 09:01:47,144 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466/jenkins-hbase4.apache.org%2C37577%2C1685782819466.1685782907031 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/oldWALs/jenkins-hbase4.apache.org%2C37577%2C1685782819466.1685782907031 2023-06-03 09:01:47,242 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-03 09:01:47,242 INFO [Listener at localhost/35195] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-03 09:01:47,242 DEBUG [Listener at localhost/35195] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1c75d30f to 127.0.0.1:64598 2023-06-03 09:01:47,243 DEBUG [Listener at localhost/35195] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:01:47,243 DEBUG [Listener at localhost/35195] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-03 09:01:47,243 DEBUG [Listener at localhost/35195] util.JVMClusterUtil(257): Found active master hash=177511409, stopped=false 2023-06-03 09:01:47,243 INFO [Listener at localhost/35195] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44421,1685782819425 2023-06-03 09:01:47,245 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-03 09:01:47,245 INFO [Listener at localhost/35195] procedure2.ProcedureExecutor(629): Stopping 2023-06-03 09:01:47,245 DEBUG [Listener at localhost/35195-EventThread] 
zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:01:47,245 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): regionserver:37577-0x1008feaf5540001, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-03 09:01:47,245 DEBUG [Listener at localhost/35195] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x10efb0e0 to 127.0.0.1:64598 2023-06-03 09:01:47,246 DEBUG [Listener at localhost/35195] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:01:47,246 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 09:01:47,246 INFO [Listener at localhost/35195] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,37577,1685782819466' ***** 2023-06-03 09:01:47,246 INFO [Listener at localhost/35195] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-03 09:01:47,246 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37577-0x1008feaf5540001, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 09:01:47,247 INFO [RS:0;jenkins-hbase4:37577] regionserver.HeapMemoryManager(220): Stopping 2023-06-03 09:01:47,247 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-03 09:01:47,247 INFO [RS:0;jenkins-hbase4:37577] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-03 09:01:47,247 INFO [RS:0;jenkins-hbase4:37577] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-03 09:01:47,247 INFO [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(3303): Received CLOSE for 01defe61a22aefced1759181bf235f6b 2023-06-03 09:01:47,247 INFO [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(3303): Received CLOSE for f0596de6e6fa870e80124c219e871a19 2023-06-03 09:01:47,247 INFO [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(3303): Received CLOSE for 9040292e525670edb46e9f2772289667 2023-06-03 09:01:47,247 INFO [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:01:47,247 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 01defe61a22aefced1759181bf235f6b, disabling compactions & flushes 2023-06-03 09:01:47,247 DEBUG [RS:0;jenkins-hbase4:37577] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x65b69b26 to 127.0.0.1:64598 2023-06-03 09:01:47,247 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b. 2023-06-03 09:01:47,247 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b. 
2023-06-03 09:01:47,247 DEBUG [RS:0;jenkins-hbase4:37577] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:01:47,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b. after waiting 0 ms 2023-06-03 09:01:47,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b. 2023-06-03 09:01:47,248 INFO [RS:0;jenkins-hbase4:37577] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-03 09:01:47,248 INFO [RS:0;jenkins-hbase4:37577] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-03 09:01:47,248 INFO [RS:0;jenkins-hbase4:37577] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-03 09:01:47,248 INFO [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-03 09:01:47,248 INFO [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-06-03 09:01:47,249 DEBUG [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(1478): Online Regions={01defe61a22aefced1759181bf235f6b=TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b., f0596de6e6fa870e80124c219e871a19=hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19., 1588230740=hbase:meta,,1.1588230740, 9040292e525670edb46e9f2772289667=TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.} 2023-06-03 09:01:47,249 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 09:01:47,249 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b/info/4609d040a6994428aae42f0ebc7f092c.9f1b994ea463b48490327c6293d524e2->hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4609d040a6994428aae42f0ebc7f092c-bottom] to archive 2023-06-03 09:01:47,249 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 09:01:47,249 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 09:01:47,249 DEBUG [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(1504): Waiting on 01defe61a22aefced1759181bf235f6b, 1588230740, 9040292e525670edb46e9f2772289667, f0596de6e6fa870e80124c219e871a19 2023-06-03 09:01:47,249 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 09:01:47,249 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 09:01:47,252 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-03 09:01:47,255 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b/info/4609d040a6994428aae42f0ebc7f092c.9f1b994ea463b48490327c6293d524e2 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b/info/4609d040a6994428aae42f0ebc7f092c.9f1b994ea463b48490327c6293d524e2 2023-06-03 09:01:47,258 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-06-03 09:01:47,259 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-03 09:01:47,259 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-03 09:01:47,259 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 09:01:47,259 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-03 09:01:47,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/01defe61a22aefced1759181bf235f6b/recovered.edits/126.seqid, newMaxSeqId=126, maxSeqId=121 2023-06-03 09:01:47,262 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b. 2023-06-03 09:01:47,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 01defe61a22aefced1759181bf235f6b: 2023-06-03 09:01:47,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1685782844716.01defe61a22aefced1759181bf235f6b. 2023-06-03 09:01:47,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f0596de6e6fa870e80124c219e871a19, disabling compactions & flushes 2023-06-03 09:01:47,262 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. 2023-06-03 09:01:47,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. 2023-06-03 09:01:47,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. after waiting 0 ms 2023-06-03 09:01:47,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. 
2023-06-03 09:01:47,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/hbase/namespace/f0596de6e6fa870e80124c219e871a19/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-03 09:01:47,267 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. 2023-06-03 09:01:47,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f0596de6e6fa870e80124c219e871a19: 2023-06-03 09:01:47,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685782820001.f0596de6e6fa870e80124c219e871a19. 2023-06-03 09:01:47,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9040292e525670edb46e9f2772289667, disabling compactions & flushes 2023-06-03 09:01:47,267 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 2023-06-03 09:01:47,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 2023-06-03 09:01:47,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. after waiting 0 ms 2023-06-03 09:01:47,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 
2023-06-03 09:01:47,277 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/4609d040a6994428aae42f0ebc7f092c.9f1b994ea463b48490327c6293d524e2->hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9f1b994ea463b48490327c6293d524e2/info/4609d040a6994428aae42f0ebc7f092c-top, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-710d26fe14c64756aa90c2d967b65adf, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/6a4a678f79ce4805a095fe63b4e58b3d, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-de2f9c6df62547f08036fd7812d633f1, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/ca92a972cf2e40eeb76486f960d1b51a, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/53c410c8df11409985c96a6ab39ba1b1, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/895bcf2d2f8642428824479014264939, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/a7d45041ffd5402282a72d63a0cfb64b, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/735437bbe395400789e3ecea55f7e166, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/20eba4e95ff747f18db4e9da5e355837, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/6533df83ae584af8b9090b3fde050b9b, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/92ed5560e78942abb6c67fa4b1a538a9, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/3ca1c3cb24984815a4837a2250b0f838, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/cfc2c950f813463f899b70751aaacc3d, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/35baa9d6b9f1419abfaad9efaeaf0af5, 
hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/be67605993c244e886dbec56aaa0141f, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/352fbf8882f04b97a9c4706df0d27467, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/bf75e42e43c14421a0fee0163cc9da9c, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/9c1398bdd58248a98b84d3a9b0746f3a, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/d9a1381adbb945baaf2b27144b9eac20, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/ad34e0a61e1e436b8d3e08be98d03002] to archive 2023-06-03 09:01:47,278 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-06-03 09:01:47,279 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/4609d040a6994428aae42f0ebc7f092c.9f1b994ea463b48490327c6293d524e2 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/4609d040a6994428aae42f0ebc7f092c.9f1b994ea463b48490327c6293d524e2 2023-06-03 09:01:47,281 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-710d26fe14c64756aa90c2d967b65adf to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-710d26fe14c64756aa90c2d967b65adf 2023-06-03 09:01:47,282 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/6a4a678f79ce4805a095fe63b4e58b3d to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/6a4a678f79ce4805a095fe63b4e58b3d 2023-06-03 09:01:47,283 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from 
FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-de2f9c6df62547f08036fd7812d633f1 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/TestLogRolling-testLogRolling=9f1b994ea463b48490327c6293d524e2-de2f9c6df62547f08036fd7812d633f1 2023-06-03 09:01:47,284 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/ca92a972cf2e40eeb76486f960d1b51a to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/ca92a972cf2e40eeb76486f960d1b51a 2023-06-03 09:01:47,285 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/53c410c8df11409985c96a6ab39ba1b1 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/53c410c8df11409985c96a6ab39ba1b1 2023-06-03 09:01:47,286 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/895bcf2d2f8642428824479014264939 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/895bcf2d2f8642428824479014264939 2023-06-03 09:01:47,287 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/a7d45041ffd5402282a72d63a0cfb64b to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/a7d45041ffd5402282a72d63a0cfb64b 2023-06-03 09:01:47,288 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/735437bbe395400789e3ecea55f7e166 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/735437bbe395400789e3ecea55f7e166 2023-06-03 
09:01:47,289 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/20eba4e95ff747f18db4e9da5e355837 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/20eba4e95ff747f18db4e9da5e355837 2023-06-03 09:01:47,290 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/6533df83ae584af8b9090b3fde050b9b to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/6533df83ae584af8b9090b3fde050b9b 2023-06-03 09:01:47,291 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/92ed5560e78942abb6c67fa4b1a538a9 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/92ed5560e78942abb6c67fa4b1a538a9 2023-06-03 09:01:47,292 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/3ca1c3cb24984815a4837a2250b0f838 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/3ca1c3cb24984815a4837a2250b0f838 2023-06-03 09:01:47,293 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/cfc2c950f813463f899b70751aaacc3d to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/cfc2c950f813463f899b70751aaacc3d 2023-06-03 09:01:47,295 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/35baa9d6b9f1419abfaad9efaeaf0af5 to 
hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/35baa9d6b9f1419abfaad9efaeaf0af5 2023-06-03 09:01:47,296 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/be67605993c244e886dbec56aaa0141f to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/be67605993c244e886dbec56aaa0141f 2023-06-03 09:01:47,297 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/352fbf8882f04b97a9c4706df0d27467 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/352fbf8882f04b97a9c4706df0d27467 2023-06-03 09:01:47,298 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/bf75e42e43c14421a0fee0163cc9da9c to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/bf75e42e43c14421a0fee0163cc9da9c 2023-06-03 09:01:47,299 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/9c1398bdd58248a98b84d3a9b0746f3a to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/9c1398bdd58248a98b84d3a9b0746f3a 2023-06-03 09:01:47,300 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/d9a1381adbb945baaf2b27144b9eac20 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/d9a1381adbb945baaf2b27144b9eac20 2023-06-03 09:01:47,301 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/ad34e0a61e1e436b8d3e08be98d03002 to hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/archive/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/info/ad34e0a61e1e436b8d3e08be98d03002 2023-06-03 09:01:47,305 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/data/default/TestLogRolling-testLogRolling/9040292e525670edb46e9f2772289667/recovered.edits/332.seqid, newMaxSeqId=332, maxSeqId=121 2023-06-03 09:01:47,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 2023-06-03 09:01:47,306 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9040292e525670edb46e9f2772289667: 2023-06-03 09:01:47,306 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1685782844716.9040292e525670edb46e9f2772289667. 2023-06-03 09:01:47,449 INFO [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37577,1685782819466; all regions closed. 2023-06-03 09:01:47,450 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:01:47,456 DEBUG [RS:0;jenkins-hbase4:37577] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/oldWALs 2023-06-03 09:01:47,456 INFO [RS:0;jenkins-hbase4:37577] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C37577%2C1685782819466.meta:.meta(num 1685782819952) 2023-06-03 09:01:47,456 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/WALs/jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:01:47,462 DEBUG [RS:0;jenkins-hbase4:37577] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/oldWALs 2023-06-03 09:01:47,462 INFO [RS:0;jenkins-hbase4:37577] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C37577%2C1685782819466:(num 1685782907126) 2023-06-03 09:01:47,462 DEBUG [RS:0;jenkins-hbase4:37577] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:01:47,462 INFO [RS:0;jenkins-hbase4:37577] regionserver.LeaseManager(133): Closed leases 2023-06-03 09:01:47,462 INFO [RS:0;jenkins-hbase4:37577] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-03 09:01:47,463 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
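The HFileArchiver entries above show each compacted store file being moved rather than deleted: the destination under archive/ mirrors the file's original path under data/ relative to the HBase root directory. Below is a minimal sketch of that path mapping and move using plain Hadoop FileSystem calls; it is not HFileArchiver's actual implementation, and the class and method names are illustrative only.

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Illustrative only: mirrors what the HFileArchiver lines above show. A store file at
    // <root>/data/<ns>/<table>/<region>/<cf>/<file> is moved to the same relative path
    // under <root>/archive/. The real archiver also handles retries, name collisions, etc.
    final class ArchivePathSketch {
      static Path toArchivePath(Path rootDir, Path storeFile) {
        String relative = storeFile.toUri().getPath()
            .substring(rootDir.toUri().getPath().length() + 1); // e.g. data/default/<table>/<region>/info/<file>
        return new Path(new Path(rootDir, "archive"), relative);
      }

      static void archive(FileSystem fs, Path rootDir, Path storeFile) throws Exception {
        Path dest = toArchivePath(rootDir, storeFile);
        fs.mkdirs(dest.getParent());                 // ensure archive/<...>/info exists
        if (!fs.rename(storeFile, dest)) {           // move, don't copy or delete
          throw new IllegalStateException("could not archive " + storeFile);
        }
      }
    }

Archiving by rename keeps the old files around for anything that may still reference them (snapshots, for example); separate cleaner chores, visible later in this shutdown as HFileCleaner threads, are responsible for eventually deleting archived files that are no longer needed.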
2023-06-03 09:01:47,463 INFO [RS:0;jenkins-hbase4:37577] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37577 2023-06-03 09:01:47,466 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 09:01:47,466 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): regionserver:37577-0x1008feaf5540001, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37577,1685782819466 2023-06-03 09:01:47,466 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): regionserver:37577-0x1008feaf5540001, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 09:01:47,466 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37577,1685782819466] 2023-06-03 09:01:47,466 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37577,1685782819466; numProcessing=1 2023-06-03 09:01:47,469 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37577,1685782819466 already deleted, retry=false 2023-06-03 09:01:47,469 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37577,1685782819466 expired; onlineServers=0 2023-06-03 09:01:47,469 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44421,1685782819425' ***** 2023-06-03 09:01:47,469 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-03 09:01:47,469 DEBUG [M:0;jenkins-hbase4:44421] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@eab1e7b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-03 09:01:47,470 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44421,1685782819425 2023-06-03 09:01:47,470 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44421,1685782819425; all regions closed. 2023-06-03 09:01:47,470 DEBUG [M:0;jenkins-hbase4:44421] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:01:47,470 DEBUG [M:0;jenkins-hbase4:44421] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-03 09:01:47,470 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
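The NodeDeleted event on /hbase/rs/jenkins-hbase4.apache.org,37577,... and the RegionServerTracker expiration that follows are the standard ZooKeeper ephemeral-node liveness pattern: each region server registers an ephemeral znode under /hbase/rs, and the master watches that path for children changes. A compact sketch of the pattern with the plain ZooKeeper client follows; it is not HBase's ZKWatcher or RegionServerTracker, and the class name is made up.

    import java.util.List;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Illustrative sketch: an ephemeral znode disappears when its owner's session ends,
    // so watchers on /hbase/rs see NodeChildrenChanged and can diff the live server list.
    public class RsTrackerSketch implements Watcher {
      private final ZooKeeper zk;

      RsTrackerSketch(String quorum) throws Exception {
        this.zk = new ZooKeeper(quorum, 30_000, this);
      }

      void registerServer(String serverName) throws Exception {
        // what a region server does at startup; the node vanishes if the process dies
        zk.create("/hbase/rs/" + serverName, new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      }

      @Override
      public void process(WatchedEvent event) {
        try {
          if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged
              && "/hbase/rs".equals(event.getPath())) {
            List<String> live = zk.getChildren("/hbase/rs", this); // re-arm the watch
            System.out.println("live region servers: " + live);
          }
        } catch (Exception e) {
          e.printStackTrace();
        }
      }
    }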
2023-06-03 09:01:47,470 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782819596] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782819596,5,FailOnTimeoutGroup] 2023-06-03 09:01:47,470 DEBUG [M:0;jenkins-hbase4:44421] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-03 09:01:47,470 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782819595] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782819595,5,FailOnTimeoutGroup] 2023-06-03 09:01:47,470 INFO [M:0;jenkins-hbase4:44421] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-03 09:01:47,470 INFO [M:0;jenkins-hbase4:44421] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-03 09:01:47,470 INFO [M:0;jenkins-hbase4:44421] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-06-03 09:01:47,470 DEBUG [M:0;jenkins-hbase4:44421] master.HMaster(1512): Stopping service threads 2023-06-03 09:01:47,470 INFO [M:0;jenkins-hbase4:44421] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-03 09:01:47,471 ERROR [M:0;jenkins-hbase4:44421] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-03 09:01:47,471 INFO [M:0;jenkins-hbase4:44421] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-03 09:01:47,471 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-03 09:01:47,472 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-03 09:01:47,472 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:01:47,472 DEBUG [M:0;jenkins-hbase4:44421] zookeeper.ZKUtil(398): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-03 09:01:47,472 WARN [M:0;jenkins-hbase4:44421] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-03 09:01:47,472 INFO [M:0;jenkins-hbase4:44421] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-03 09:01:47,472 INFO [M:0;jenkins-hbase4:44421] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-03 09:01:47,473 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 09:01:47,473 DEBUG [M:0;jenkins-hbase4:44421] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-03 09:01:47,473 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:01:47,473 DEBUG [M:0;jenkins-hbase4:44421] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:01:47,473 DEBUG [M:0;jenkins-hbase4:44421] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-03 09:01:47,473 DEBUG [M:0;jenkins-hbase4:44421] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
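The sequence "disabling compactions & flushes", "Waiting without time limit for close lock", "Acquired close lock ... after waiting 0 ms", "Updates disabled" reflects a read/write-lock protocol: ordinary mutations hold the lock in shared mode, and close acquires it exclusively, so close only proceeds once in-flight updates have drained. A simplified sketch of that protocol is below; the names are illustrative and this is not HRegion's code.

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Illustrative close-lock sketch: many mutations run concurrently under the read lock;
    // close takes the write lock, which is why the log reports waiting for the close lock
    // before "Updates disabled for region ...".
    final class CloseLockSketch {
      private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
      private volatile boolean closed;

      void applyMutation(Runnable mutation) {
        lock.readLock().lock();               // shared: concurrent with other mutations
        try {
          if (closed) throw new IllegalStateException("region is closing");
          mutation.run();
        } finally {
          lock.readLock().unlock();
        }
      }

      void close(Runnable flushAndCleanup) {
        lock.writeLock().lock();              // exclusive: waits for in-flight mutations
        try {
          closed = true;                      // updates disabled from here on
          flushAndCleanup.run();
        } finally {
          lock.writeLock().unlock();
        }
      }
    }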
2023-06-03 09:01:47,473 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.70 KB heapSize=78.42 KB 2023-06-03 09:01:47,484 INFO [M:0;jenkins-hbase4:44421] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.70 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e61a82e8e3324031b947aa1e55b473dc 2023-06-03 09:01:47,489 INFO [M:0;jenkins-hbase4:44421] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e61a82e8e3324031b947aa1e55b473dc 2023-06-03 09:01:47,491 DEBUG [M:0;jenkins-hbase4:44421] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e61a82e8e3324031b947aa1e55b473dc as hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e61a82e8e3324031b947aa1e55b473dc 2023-06-03 09:01:47,496 INFO [M:0;jenkins-hbase4:44421] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e61a82e8e3324031b947aa1e55b473dc 2023-06-03 09:01:47,496 INFO [M:0;jenkins-hbase4:44421] regionserver.HStore(1080): Added hdfs://localhost:34905/user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e61a82e8e3324031b947aa1e55b473dc, entries=18, sequenceid=160, filesize=6.9 K 2023-06-03 09:01:47,497 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegion(2948): Finished flush of dataSize ~64.70 KB/66256, heapSize ~78.41 KB/80288, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=160, compaction requested=false 2023-06-03 09:01:47,498 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:01:47,498 DEBUG [M:0;jenkins-hbase4:44421] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 09:01:47,498 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/07440448-799a-4612-c385-8d81a239a5d5/MasterData/WALs/jenkins-hbase4.apache.org,44421,1685782819425 2023-06-03 09:01:47,502 INFO [M:0;jenkins-hbase4:44421] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-03 09:01:47,502 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-03 09:01:47,502 INFO [M:0;jenkins-hbase4:44421] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44421 2023-06-03 09:01:47,504 DEBUG [M:0;jenkins-hbase4:44421] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44421,1685782819425 already deleted, retry=false 2023-06-03 09:01:47,568 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): regionserver:37577-0x1008feaf5540001, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 09:01:47,568 INFO [RS:0;jenkins-hbase4:37577] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37577,1685782819466; zookeeper connection closed. 
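The flush above follows a write-then-commit pattern: the new store file is written under the region's .tmp directory (.tmp/proc/e61a82e8...) and then committed by renaming it into the proc family directory, so readers never observe a half-written file. Below is a simplified sketch of that pattern with Hadoop FileSystem calls; it is not HRegionFileSystem itself, and the raw byte payload stands in for the real HFile writer.

    import java.util.UUID;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Illustrative only: write the new file under <region>/.tmp/<cf>/ and rename it into
    // <region>/<cf>/ once complete, matching the "Committing ... as ..." log line above.
    final class FlushCommitSketch {
      static Path writeAndCommit(FileSystem fs, Path regionDir, String family, byte[] payload)
          throws Exception {
        String fileName = UUID.randomUUID().toString().replace("-", "");
        Path tmpFile = new Path(new Path(regionDir, ".tmp/" + family), fileName);
        try (FSDataOutputStream out = fs.create(tmpFile)) {
          out.write(payload);                  // stand-in for the real HFile writer
        }
        Path committed = new Path(new Path(regionDir, family), fileName);
        fs.mkdirs(committed.getParent());
        if (!fs.rename(tmpFile, committed)) {  // the "commit" step
          throw new IllegalStateException("rename failed for " + tmpFile);
        }
        return committed;
      }
    }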
2023-06-03 09:01:47,568 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): regionserver:37577-0x1008feaf5540001, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 09:01:47,569 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6861df8a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6861df8a 2023-06-03 09:01:47,569 INFO [Listener at localhost/35195] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-03 09:01:47,668 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 09:01:47,668 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44421,1685782819425; zookeeper connection closed. 2023-06-03 09:01:47,668 DEBUG [Listener at localhost/35195-EventThread] zookeeper.ZKWatcher(600): master:44421-0x1008feaf5540000, quorum=127.0.0.1:64598, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 09:01:47,669 WARN [Listener at localhost/35195] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 09:01:47,673 INFO [Listener at localhost/35195] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 09:01:47,721 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-03 09:01:47,778 WARN [BP-1406057023-172.31.14.131-1685782818899 heartbeating to localhost/127.0.0.1:34905] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 09:01:47,778 WARN [BP-1406057023-172.31.14.131-1685782818899 heartbeating to localhost/127.0.0.1:34905] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1406057023-172.31.14.131-1685782818899 (Datanode Uuid a310907c-28e6-4c2d-bc18-c67838541530) service to localhost/127.0.0.1:34905 2023-06-03 09:01:47,779 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/cluster_127e7cca-a14a-af83-3631-02931177ad02/dfs/data/data3/current/BP-1406057023-172.31.14.131-1685782818899] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 09:01:47,779 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/cluster_127e7cca-a14a-af83-3631-02931177ad02/dfs/data/data4/current/BP-1406057023-172.31.14.131-1685782818899] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 09:01:47,781 WARN [Listener at localhost/35195] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 09:01:47,784 INFO [Listener at localhost/35195] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 09:01:47,889 WARN [BP-1406057023-172.31.14.131-1685782818899 heartbeating to localhost/127.0.0.1:34905] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 09:01:47,889 WARN [BP-1406057023-172.31.14.131-1685782818899 heartbeating to 
localhost/127.0.0.1:34905] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1406057023-172.31.14.131-1685782818899 (Datanode Uuid 97b1e5c9-1a6e-48de-a813-d915e254033f) service to localhost/127.0.0.1:34905 2023-06-03 09:01:47,890 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/cluster_127e7cca-a14a-af83-3631-02931177ad02/dfs/data/data1/current/BP-1406057023-172.31.14.131-1685782818899] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 09:01:47,891 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/cluster_127e7cca-a14a-af83-3631-02931177ad02/dfs/data/data2/current/BP-1406057023-172.31.14.131-1685782818899] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 09:01:47,905 INFO [Listener at localhost/35195] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 09:01:48,021 INFO [Listener at localhost/35195] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-03 09:01:48,049 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-03 09:01:48,059 INFO [Listener at localhost/35195] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=105 (was 94) - Thread LEAK? -, OpenFileDescriptor=537 (was 500) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=86 (was 35) - SystemLoadAverage LEAK? -, ProcessCount=169 (was 169), AvailableMemoryMB=727 (was 1022) 2023-06-03 09:01:48,068 INFO [Listener at localhost/35195] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=105, OpenFileDescriptor=537, MaxFileDescriptor=60000, SystemLoadAverage=86, ProcessCount=169, AvailableMemoryMB=726 2023-06-03 09:01:48,068 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-03 09:01:48,068 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/hadoop.log.dir so I do NOT create it in target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c 2023-06-03 09:01:48,068 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fabec95b-f9fe-06fc-77b7-69bd691b3ddc/hadoop.tmp.dir so I do NOT create it in target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c 2023-06-03 09:01:48,068 INFO [Listener at localhost/35195] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/cluster_a14511d1-bff9-3179-9155-71015ab4659a, deleteOnExit=true 2023-06-03 09:01:48,068 INFO [Listener at 
localhost/35195] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-03 09:01:48,069 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/test.cache.data in system properties and HBase conf 2023-06-03 09:01:48,069 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/hadoop.tmp.dir in system properties and HBase conf 2023-06-03 09:01:48,069 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/hadoop.log.dir in system properties and HBase conf 2023-06-03 09:01:48,069 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-03 09:01:48,069 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-03 09:01:48,069 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-03 09:01:48,069 DEBUG [Listener at localhost/35195] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-06-03 09:01:48,069 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-03 09:01:48,069 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-03 09:01:48,070 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-03 09:01:48,070 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-03 09:01:48,070 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-03 09:01:48,070 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-03 09:01:48,070 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-03 09:01:48,070 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-03 09:01:48,070 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-03 09:01:48,070 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/nfs.dump.dir in system properties and HBase conf 2023-06-03 09:01:48,070 INFO [Listener at localhost/35195] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/java.io.tmpdir in system properties and HBase conf 2023-06-03 09:01:48,070 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-03 09:01:48,070 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-03 09:01:48,071 INFO [Listener at localhost/35195] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-03 09:01:48,072 WARN [Listener at localhost/35195] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-03 09:01:48,075 WARN [Listener at localhost/35195] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-03 09:01:48,075 WARN [Listener at localhost/35195] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-03 09:01:48,115 WARN [Listener at localhost/35195] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 09:01:48,117 INFO [Listener at localhost/35195] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 09:01:48,122 INFO [Listener at localhost/35195] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/java.io.tmpdir/Jetty_localhost_44043_hdfs____lg5brt/webapp 2023-06-03 09:01:48,213 INFO [Listener at localhost/35195] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44043 2023-06-03 09:01:48,215 WARN [Listener at localhost/35195] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
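The teardown and restart sequence in this stretch ("Minicluster is down", the ResourceChecker before/after lines, then a fresh DFS, ZooKeeper and HBase startup for testLogRollOnNothingWritten) is driven from the test through HBaseTestingUtility. A minimal JUnit sketch of that lifecycle is below, using the options the log prints above (numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1); it is a sketch of the pattern, not the actual TestLogRolling source.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    // Sketch of the per-test minicluster lifecycle visible in the log: start DFS, ZK,
    // one master and one region server before the test, shut everything down after.
    public class MiniClusterLifecycleSketch {
      private final HBaseTestingUtility util = new HBaseTestingUtility();

      @Before
      public void setUp() throws Exception {
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);   // logs "Starting up minicluster with option: ..."
      }

      @Test
      public void testAgainstTheCluster() throws Exception {
        // a real test would create a table here and drive WAL rolling against it
      }

      @After
      public void tearDown() throws Exception {
        util.shutdownMiniCluster();      // produces the "Minicluster is down" line
      }
    }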
2023-06-03 09:01:48,217 WARN [Listener at localhost/35195] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-03 09:01:48,218 WARN [Listener at localhost/35195] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-03 09:01:48,259 WARN [Listener at localhost/36205] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 09:01:48,271 WARN [Listener at localhost/36205] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 09:01:48,273 WARN [Listener at localhost/36205] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 09:01:48,274 INFO [Listener at localhost/36205] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 09:01:48,279 INFO [Listener at localhost/36205] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/java.io.tmpdir/Jetty_localhost_33521_datanode____.2iku9x/webapp 2023-06-03 09:01:48,369 INFO [Listener at localhost/36205] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33521 2023-06-03 09:01:48,375 WARN [Listener at localhost/39563] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 09:01:48,386 WARN [Listener at localhost/39563] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-03 09:01:48,388 WARN [Listener at localhost/39563] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-03 09:01:48,388 INFO [Listener at localhost/39563] log.Slf4jLog(67): jetty-6.1.26 2023-06-03 09:01:48,392 INFO [Listener at localhost/39563] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/java.io.tmpdir/Jetty_localhost_43527_datanode____.98yl32/webapp 2023-06-03 09:01:48,459 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5cb39f37d3984502: Processing first storage report for DS-51843690-23fb-4f4a-b551-fab971345e2d from datanode 23288080-744f-4d03-9d8c-9ed6dad66b48 2023-06-03 09:01:48,459 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5cb39f37d3984502: from storage DS-51843690-23fb-4f4a-b551-fab971345e2d node DatanodeRegistration(127.0.0.1:33801, datanodeUuid=23288080-744f-4d03-9d8c-9ed6dad66b48, infoPort=37393, infoSecurePort=0, ipcPort=39563, storageInfo=lv=-57;cid=testClusterID;nsid=1890296984;c=1685782908077), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 09:01:48,459 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5cb39f37d3984502: Processing first storage report for DS-c004eecf-1388-4b5f-a878-9054925068f2 from datanode 23288080-744f-4d03-9d8c-9ed6dad66b48 2023-06-03 09:01:48,459 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0x5cb39f37d3984502: from storage DS-c004eecf-1388-4b5f-a878-9054925068f2 node DatanodeRegistration(127.0.0.1:33801, datanodeUuid=23288080-744f-4d03-9d8c-9ed6dad66b48, infoPort=37393, infoSecurePort=0, ipcPort=39563, storageInfo=lv=-57;cid=testClusterID;nsid=1890296984;c=1685782908077), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 09:01:48,487 INFO [Listener at localhost/39563] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43527 2023-06-03 09:01:48,492 WARN [Listener at localhost/40835] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-03 09:01:48,579 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x60554caae69cdd41: Processing first storage report for DS-6e08d5cc-bdb0-4ee6-9657-2df53e6dfecb from datanode af3cf6f5-dc06-4cc1-87be-cd7789582657 2023-06-03 09:01:48,579 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x60554caae69cdd41: from storage DS-6e08d5cc-bdb0-4ee6-9657-2df53e6dfecb node DatanodeRegistration(127.0.0.1:36623, datanodeUuid=af3cf6f5-dc06-4cc1-87be-cd7789582657, infoPort=43315, infoSecurePort=0, ipcPort=40835, storageInfo=lv=-57;cid=testClusterID;nsid=1890296984;c=1685782908077), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 09:01:48,579 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x60554caae69cdd41: Processing first storage report for DS-c93c9502-f10e-4fb6-86a8-9561e7e378ab from datanode af3cf6f5-dc06-4cc1-87be-cd7789582657 2023-06-03 09:01:48,579 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x60554caae69cdd41: from storage DS-c93c9502-f10e-4fb6-86a8-9561e7e378ab node DatanodeRegistration(127.0.0.1:36623, datanodeUuid=af3cf6f5-dc06-4cc1-87be-cd7789582657, infoPort=43315, infoSecurePort=0, ipcPort=40835, storageInfo=lv=-57;cid=testClusterID;nsid=1890296984;c=1685782908077), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-03 09:01:48,599 DEBUG [Listener at localhost/40835] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c 2023-06-03 09:01:48,602 INFO [Listener at localhost/40835] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/cluster_a14511d1-bff9-3179-9155-71015ab4659a/zookeeper_0, clientPort=50636, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/cluster_a14511d1-bff9-3179-9155-71015ab4659a/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/cluster_a14511d1-bff9-3179-9155-71015ab4659a/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-03 09:01:48,603 INFO [Listener at localhost/40835] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=50636 2023-06-03 09:01:48,603 INFO [Listener at localhost/40835] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:01:48,604 INFO [Listener at localhost/40835] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:01:48,616 INFO [Listener at localhost/40835] util.FSUtils(471): Created version file at hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445 with version=8 2023-06-03 09:01:48,616 INFO [Listener at localhost/40835] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:36003/user/jenkins/test-data/9773cf35-9f47-2c98-3863-fb7fa286563f/hbase-staging 2023-06-03 09:01:48,618 INFO [Listener at localhost/40835] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-03 09:01:48,618 INFO [Listener at localhost/40835] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 09:01:48,619 INFO [Listener at localhost/40835] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-03 09:01:48,619 INFO [Listener at localhost/40835] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-03 09:01:48,619 INFO [Listener at localhost/40835] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 09:01:48,619 INFO [Listener at localhost/40835] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-03 09:01:48,619 INFO [Listener at localhost/40835] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-03 09:01:48,620 INFO [Listener at localhost/40835] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34193 2023-06-03 09:01:48,620 INFO [Listener at localhost/40835] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:01:48,621 INFO [Listener at localhost/40835] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:01:48,622 INFO [Listener at localhost/40835] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34193 connecting to ZooKeeper ensemble=127.0.0.1:50636 2023-06-03 09:01:48,628 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:341930x0, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-03 09:01:48,628 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34193-0x1008fec51bc0000 connected 2023-06-03 09:01:48,641 DEBUG [Listener at localhost/40835] 
zookeeper.ZKUtil(164): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 09:01:48,641 DEBUG [Listener at localhost/40835] zookeeper.ZKUtil(164): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 09:01:48,642 DEBUG [Listener at localhost/40835] zookeeper.ZKUtil(164): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-03 09:01:48,642 DEBUG [Listener at localhost/40835] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34193 2023-06-03 09:01:48,642 DEBUG [Listener at localhost/40835] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34193 2023-06-03 09:01:48,642 DEBUG [Listener at localhost/40835] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34193 2023-06-03 09:01:48,643 DEBUG [Listener at localhost/40835] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34193 2023-06-03 09:01:48,643 DEBUG [Listener at localhost/40835] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34193 2023-06-03 09:01:48,643 INFO [Listener at localhost/40835] master.HMaster(444): hbase.rootdir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445, hbase.cluster.distributed=false 2023-06-03 09:01:48,656 INFO [Listener at localhost/40835] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-03 09:01:48,656 INFO [Listener at localhost/40835] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 09:01:48,656 INFO [Listener at localhost/40835] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-03 09:01:48,656 INFO [Listener at localhost/40835] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-03 09:01:48,656 INFO [Listener at localhost/40835] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-03 09:01:48,656 INFO [Listener at localhost/40835] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-03 09:01:48,656 INFO [Listener at localhost/40835] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-03 09:01:48,657 INFO [Listener at localhost/40835] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33407 2023-06-03 09:01:48,658 INFO [Listener at localhost/40835] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-03 09:01:48,659 DEBUG [Listener at localhost/40835] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-03 
09:01:48,659 INFO [Listener at localhost/40835] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:01:48,660 INFO [Listener at localhost/40835] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:01:48,661 INFO [Listener at localhost/40835] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33407 connecting to ZooKeeper ensemble=127.0.0.1:50636 2023-06-03 09:01:48,664 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): regionserver:334070x0, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-03 09:01:48,665 DEBUG [Listener at localhost/40835] zookeeper.ZKUtil(164): regionserver:334070x0, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 09:01:48,666 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33407-0x1008fec51bc0001 connected 2023-06-03 09:01:48,666 DEBUG [Listener at localhost/40835] zookeeper.ZKUtil(164): regionserver:33407-0x1008fec51bc0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 09:01:48,667 DEBUG [Listener at localhost/40835] zookeeper.ZKUtil(164): regionserver:33407-0x1008fec51bc0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-03 09:01:48,670 DEBUG [Listener at localhost/40835] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33407 2023-06-03 09:01:48,671 DEBUG [Listener at localhost/40835] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33407 2023-06-03 09:01:48,673 DEBUG [Listener at localhost/40835] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33407 2023-06-03 09:01:48,674 DEBUG [Listener at localhost/40835] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33407 2023-06-03 09:01:48,674 DEBUG [Listener at localhost/40835] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33407 2023-06-03 09:01:48,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34193,1685782908618 2023-06-03 09:01:48,677 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-03 09:01:48,677 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34193,1685782908618 2023-06-03 09:01:48,685 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-03 09:01:48,685 DEBUG [Listener at 
localhost/40835-EventThread] zookeeper.ZKWatcher(600): regionserver:33407-0x1008fec51bc0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-03 09:01:48,685 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:01:48,686 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-03 09:01:48,687 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-03 09:01:48,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34193,1685782908618 from backup master directory 2023-06-03 09:01:48,688 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34193,1685782908618 2023-06-03 09:01:48,688 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-03 09:01:48,688 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
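The repeated "Set watcher on znode that does not yet exist, /hbase/master" lines rely on ZooKeeper's exists() call, which registers a watch whether or not the node is present; when the active master later creates /hbase/master, the watcher receives the NodeCreated event seen above. A small sketch of that exists-watch pattern with the plain ZooKeeper client follows; it is illustrative and not HBase's ZKUtil.

    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    // Illustrative exists-watch sketch: the watch is armed even when the znode is absent,
    // so the caller learns about /hbase/master as soon as an active master registers.
    final class MasterWatchSketch {
      static void watchMaster(ZooKeeper zk, Watcher watcher)
          throws KeeperException, InterruptedException {
        Stat stat = zk.exists("/hbase/master", watcher);   // watch is set either way
        if (stat == null) {
          // znode absent: wait for a NodeCreated event delivered to 'watcher'
        } else {
          byte[] data = zk.getData("/hbase/master", watcher, stat);
          // in the real system this data carries the active master's ServerName
        }
      }
    }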
2023-06-03 09:01:48,688 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34193,1685782908618 2023-06-03 09:01:48,699 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/hbase.id with ID: dd41408d-d593-4cfc-b35e-84b7ef10aa37 2023-06-03 09:01:48,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:01:48,710 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:01:48,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x68c81c03 to 127.0.0.1:50636 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 09:01:48,720 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44a6fee9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 09:01:48,720 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-03 09:01:48,720 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-03 09:01:48,720 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 09:01:48,722 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/data/master/store-tmp 2023-06-03 09:01:48,727 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:01:48,728 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-03 09:01:48,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:01:48,728 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:01:48,728 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-03 09:01:48,728 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:01:48,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:01:48,728 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 09:01:48,728 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/WALs/jenkins-hbase4.apache.org,34193,1685782908618 2023-06-03 09:01:48,730 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34193%2C1685782908618, suffix=, logDir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/WALs/jenkins-hbase4.apache.org,34193,1685782908618, archiveDir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/oldWALs, maxLogs=10 2023-06-03 09:01:48,736 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/WALs/jenkins-hbase4.apache.org,34193,1685782908618/jenkins-hbase4.apache.org%2C34193%2C1685782908618.1685782908730 2023-06-03 09:01:48,736 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33801,DS-51843690-23fb-4f4a-b551-fab971345e2d,DISK], DatanodeInfoWithStorage[127.0.0.1:36623,DS-6e08d5cc-bdb0-4ee6-9657-2df53e6dfecb,DISK]] 2023-06-03 09:01:48,736 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-03 09:01:48,736 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:01:48,736 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 09:01:48,736 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 09:01:48,738 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-06-03 09:01:48,739 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-03 09:01:48,739 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-03 09:01:48,740 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:01:48,740 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-03 09:01:48,740 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-03 09:01:48,742 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-03 09:01:48,744 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 09:01:48,744 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=704342, jitterRate=-0.10438340902328491}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 09:01:48,744 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 09:01:48,744 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-03 09:01:48,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-03 09:01:48,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
2023-06-03 09:01:48,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-03 09:01:48,746 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-03 09:01:48,746 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-03 09:01:48,746 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-03 09:01:48,747 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-03 09:01:48,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-03 09:01:48,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-03 09:01:48,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-06-03 09:01:48,759 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-03 09:01:48,759 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-03 09:01:48,759 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-03 09:01:48,762 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:01:48,762 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-03 09:01:48,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-03 09:01:48,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-03 09:01:48,764 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): regionserver:33407-0x1008fec51bc0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-03 09:01:48,764 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-03 09:01:48,765 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:01:48,765 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34193,1685782908618, sessionid=0x1008fec51bc0000, setting cluster-up flag (Was=false) 2023-06-03 09:01:48,769 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:01:48,773 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-03 09:01:48,774 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34193,1685782908618 2023-06-03 09:01:48,777 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 
09:01:48,781 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-03 09:01:48,782 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34193,1685782908618 2023-06-03 09:01:48,782 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/.hbase-snapshot/.tmp 2023-06-03 09:01:48,785 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-03 09:01:48,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 09:01:48,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 09:01:48,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 09:01:48,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-03 09:01:48,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-06-03 09:01:48,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:01:48,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-03 09:01:48,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:01:48,787 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685782938787 2023-06-03 09:01:48,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-03 09:01:48,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-03 09:01:48,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-03 09:01:48,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-03 09:01:48,788 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-03 09:01:48,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-03 09:01:48,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-03 09:01:48,788 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-03 09:01:48,789 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-03 09:01:48,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-03 09:01:48,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-03 09:01:48,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-03 09:01:48,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-03 09:01:48,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-03 09:01:48,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782908789,5,FailOnTimeoutGroup] 2023-06-03 09:01:48,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782908789,5,FailOnTimeoutGroup] 2023-06-03 09:01:48,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-03 09:01:48,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-03 09:01:48,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-03 09:01:48,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-03 09:01:48,789 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-03 09:01:48,797 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-03 09:01:48,798 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-03 09:01:48,798 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445 2023-06-03 09:01:48,804 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:01:48,805 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-03 09:01:48,806 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/info 2023-06-03 09:01:48,807 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-03 09:01:48,807 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:01:48,807 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-03 09:01:48,808 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/rep_barrier 2023-06-03 09:01:48,809 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-03 09:01:48,809 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:01:48,809 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-03 09:01:48,810 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/table 2023-06-03 09:01:48,810 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-03 09:01:48,811 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:01:48,811 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740 2023-06-03 09:01:48,812 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740 2023-06-03 09:01:48,814 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-03 09:01:48,815 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-03 09:01:48,816 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 09:01:48,817 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=722443, jitterRate=-0.08136720955371857}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-03 09:01:48,817 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-03 09:01:48,817 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 09:01:48,817 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 09:01:48,817 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 09:01:48,817 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 09:01:48,817 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 09:01:48,817 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-03 09:01:48,817 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 09:01:48,818 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-03 09:01:48,818 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-03 09:01:48,818 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-03 09:01:48,820 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-03 09:01:48,821 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-03 09:01:48,876 INFO [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(951): ClusterId : dd41408d-d593-4cfc-b35e-84b7ef10aa37 2023-06-03 09:01:48,876 DEBUG [RS:0;jenkins-hbase4:33407] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-03 09:01:48,879 DEBUG [RS:0;jenkins-hbase4:33407] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-03 09:01:48,879 DEBUG [RS:0;jenkins-hbase4:33407] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-03 09:01:48,881 DEBUG [RS:0;jenkins-hbase4:33407] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-03 09:01:48,882 DEBUG [RS:0;jenkins-hbase4:33407] zookeeper.ReadOnlyZKClient(139): Connect 0x2b922277 to 127.0.0.1:50636 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 09:01:48,885 DEBUG [RS:0;jenkins-hbase4:33407] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2555c975, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 09:01:48,885 DEBUG [RS:0;jenkins-hbase4:33407] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@216a4907, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-03 09:01:48,894 DEBUG [RS:0;jenkins-hbase4:33407] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33407 2023-06-03 09:01:48,894 INFO [RS:0;jenkins-hbase4:33407] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-03 09:01:48,894 INFO [RS:0;jenkins-hbase4:33407] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-03 09:01:48,894 DEBUG [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-03 09:01:48,895 INFO [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,34193,1685782908618 with isa=jenkins-hbase4.apache.org/172.31.14.131:33407, startcode=1685782908655 2023-06-03 09:01:48,895 DEBUG [RS:0;jenkins-hbase4:33407] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-03 09:01:48,898 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53263, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-06-03 09:01:48,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34193] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33407,1685782908655 2023-06-03 09:01:48,899 DEBUG [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445 2023-06-03 09:01:48,899 DEBUG [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36205 2023-06-03 09:01:48,899 DEBUG [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-03 09:01:48,901 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 09:01:48,901 DEBUG [RS:0;jenkins-hbase4:33407] zookeeper.ZKUtil(162): regionserver:33407-0x1008fec51bc0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33407,1685782908655 2023-06-03 09:01:48,901 WARN [RS:0;jenkins-hbase4:33407] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-03 09:01:48,901 INFO [RS:0;jenkins-hbase4:33407] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 09:01:48,902 DEBUG [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(1946): logDir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/jenkins-hbase4.apache.org,33407,1685782908655 2023-06-03 09:01:48,902 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33407,1685782908655] 2023-06-03 09:01:48,905 DEBUG [RS:0;jenkins-hbase4:33407] zookeeper.ZKUtil(162): regionserver:33407-0x1008fec51bc0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33407,1685782908655 2023-06-03 09:01:48,906 DEBUG [RS:0;jenkins-hbase4:33407] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-03 09:01:48,906 INFO [RS:0;jenkins-hbase4:33407] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-03 09:01:48,907 INFO [RS:0;jenkins-hbase4:33407] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-03 09:01:48,907 INFO [RS:0;jenkins-hbase4:33407] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-03 09:01:48,907 INFO [RS:0;jenkins-hbase4:33407] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 09:01:48,907 INFO [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-03 09:01:48,908 INFO [RS:0;jenkins-hbase4:33407] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-03 09:01:48,909 DEBUG [RS:0;jenkins-hbase4:33407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:01:48,909 DEBUG [RS:0;jenkins-hbase4:33407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:01:48,909 DEBUG [RS:0;jenkins-hbase4:33407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:01:48,909 DEBUG [RS:0;jenkins-hbase4:33407] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:01:48,909 DEBUG [RS:0;jenkins-hbase4:33407] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:01:48,909 DEBUG [RS:0;jenkins-hbase4:33407] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-03 09:01:48,909 DEBUG [RS:0;jenkins-hbase4:33407] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:01:48,909 DEBUG [RS:0;jenkins-hbase4:33407] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:01:48,909 DEBUG [RS:0;jenkins-hbase4:33407] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:01:48,909 DEBUG [RS:0;jenkins-hbase4:33407] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-03 09:01:48,910 INFO [RS:0;jenkins-hbase4:33407] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-03 09:01:48,910 INFO [RS:0;jenkins-hbase4:33407] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-03 09:01:48,910 INFO [RS:0;jenkins-hbase4:33407] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-03 09:01:48,920 INFO [RS:0;jenkins-hbase4:33407] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-03 09:01:48,920 INFO [RS:0;jenkins-hbase4:33407] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33407,1685782908655-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-03 09:01:48,931 INFO [RS:0;jenkins-hbase4:33407] regionserver.Replication(203): jenkins-hbase4.apache.org,33407,1685782908655 started 2023-06-03 09:01:48,931 INFO [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33407,1685782908655, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33407, sessionid=0x1008fec51bc0001 2023-06-03 09:01:48,931 DEBUG [RS:0;jenkins-hbase4:33407] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-03 09:01:48,931 DEBUG [RS:0;jenkins-hbase4:33407] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33407,1685782908655 2023-06-03 09:01:48,931 DEBUG [RS:0;jenkins-hbase4:33407] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33407,1685782908655' 2023-06-03 09:01:48,931 DEBUG [RS:0;jenkins-hbase4:33407] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-03 09:01:48,932 DEBUG [RS:0;jenkins-hbase4:33407] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-03 09:01:48,932 DEBUG [RS:0;jenkins-hbase4:33407] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-03 09:01:48,932 DEBUG [RS:0;jenkins-hbase4:33407] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-03 09:01:48,932 DEBUG [RS:0;jenkins-hbase4:33407] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33407,1685782908655 2023-06-03 09:01:48,932 DEBUG [RS:0;jenkins-hbase4:33407] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33407,1685782908655' 2023-06-03 09:01:48,932 DEBUG [RS:0;jenkins-hbase4:33407] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-03 09:01:48,932 DEBUG [RS:0;jenkins-hbase4:33407] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-03 09:01:48,932 DEBUG [RS:0;jenkins-hbase4:33407] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-03 09:01:48,932 INFO [RS:0;jenkins-hbase4:33407] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-03 09:01:48,932 INFO [RS:0;jenkins-hbase4:33407] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-03 09:01:48,971 DEBUG [jenkins-hbase4:34193] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-03 09:01:48,972 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33407,1685782908655, state=OPENING 2023-06-03 09:01:48,974 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-03 09:01:48,975 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:01:48,975 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-03 09:01:48,975 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33407,1685782908655}] 2023-06-03 09:01:49,034 INFO [RS:0;jenkins-hbase4:33407] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33407%2C1685782908655, suffix=, logDir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/jenkins-hbase4.apache.org,33407,1685782908655, archiveDir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/oldWALs, maxLogs=32 2023-06-03 09:01:49,041 INFO [RS:0;jenkins-hbase4:33407] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/jenkins-hbase4.apache.org,33407,1685782908655/jenkins-hbase4.apache.org%2C33407%2C1685782908655.1685782909035 2023-06-03 09:01:49,041 DEBUG [RS:0;jenkins-hbase4:33407] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36623,DS-6e08d5cc-bdb0-4ee6-9657-2df53e6dfecb,DISK], DatanodeInfoWithStorage[127.0.0.1:33801,DS-51843690-23fb-4f4a-b551-fab971345e2d,DISK]] 2023-06-03 09:01:49,129 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33407,1685782908655 2023-06-03 09:01:49,129 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-03 09:01:49,131 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42378, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-03 09:01:49,135 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-03 09:01:49,135 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 09:01:49,137 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33407%2C1685782908655.meta, suffix=.meta, logDir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/jenkins-hbase4.apache.org,33407,1685782908655, archiveDir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/oldWALs, maxLogs=32 2023-06-03 09:01:49,145 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/jenkins-hbase4.apache.org,33407,1685782908655/jenkins-hbase4.apache.org%2C33407%2C1685782908655.meta.1685782909137.meta 2023-06-03 09:01:49,145 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33801,DS-51843690-23fb-4f4a-b551-fab971345e2d,DISK], DatanodeInfoWithStorage[127.0.0.1:36623,DS-6e08d5cc-bdb0-4ee6-9657-2df53e6dfecb,DISK]] 2023-06-03 09:01:49,145 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-03 09:01:49,145 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-03 09:01:49,145 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-03 09:01:49,145 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-03 09:01:49,145 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-03 09:01:49,145 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:01:49,146 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-03 09:01:49,146 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-03 09:01:49,147 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-03 09:01:49,148 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/info 2023-06-03 09:01:49,148 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/info 2023-06-03 09:01:49,148 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-03 09:01:49,148 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:01:49,148 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-03 09:01:49,149 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/rep_barrier 2023-06-03 09:01:49,149 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/rep_barrier 2023-06-03 09:01:49,149 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-03 09:01:49,150 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:01:49,150 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-03 09:01:49,151 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/table 2023-06-03 09:01:49,151 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/table 2023-06-03 09:01:49,151 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-03 09:01:49,152 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:01:49,152 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740 2023-06-03 09:01:49,153 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740 2023-06-03 09:01:49,155 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-03 09:01:49,156 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-03 09:01:49,157 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=856254, jitterRate=0.08878344297409058}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-03 09:01:49,157 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-03 09:01:49,158 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685782909129 2023-06-03 09:01:49,161 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-03 09:01:49,162 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-03 09:01:49,162 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33407,1685782908655, state=OPEN 2023-06-03 09:01:49,164 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-03 09:01:49,164 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-03 09:01:49,166 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-03 09:01:49,166 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33407,1685782908655 in 189 msec 2023-06-03 09:01:49,167 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-03 09:01:49,167 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 348 msec 2023-06-03 09:01:49,169 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 385 msec 2023-06-03 09:01:49,169 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685782909169, completionTime=-1 2023-06-03 09:01:49,169 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-03 09:01:49,169 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-03 09:01:49,171 DEBUG [hconnection-0x1eb9a079-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-03 09:01:49,173 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42384, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-03 09:01:49,175 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-03 09:01:49,175 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685782969175 2023-06-03 09:01:49,175 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685783029175 2023-06-03 09:01:49,175 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-06-03 09:01:49,181 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34193,1685782908618-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-03 09:01:49,181 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34193,1685782908618-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 09:01:49,181 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34193,1685782908618-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 09:01:49,181 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34193, period=300000, unit=MILLISECONDS is enabled. 2023-06-03 09:01:49,181 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-03 09:01:49,182 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-03 09:01:49,182 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-03 09:01:49,183 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-03 09:01:49,184 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-03 09:01:49,184 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-03 09:01:49,185 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-03 09:01:49,186 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/.tmp/data/hbase/namespace/06c25c555b5e9c3e33c6a7da8ce17ffe 2023-06-03 09:01:49,187 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/.tmp/data/hbase/namespace/06c25c555b5e9c3e33c6a7da8ce17ffe empty. 2023-06-03 09:01:49,187 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/.tmp/data/hbase/namespace/06c25c555b5e9c3e33c6a7da8ce17ffe 2023-06-03 09:01:49,187 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-03 09:01:49,198 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-03 09:01:49,200 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 06c25c555b5e9c3e33c6a7da8ce17ffe, NAME => 'hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/.tmp 2023-06-03 09:01:49,206 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:01:49,206 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 06c25c555b5e9c3e33c6a7da8ce17ffe, disabling compactions & flushes 2023-06-03 09:01:49,206 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. 
2023-06-03 09:01:49,206 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. 2023-06-03 09:01:49,206 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. after waiting 0 ms 2023-06-03 09:01:49,206 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. 2023-06-03 09:01:49,206 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. 2023-06-03 09:01:49,206 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 06c25c555b5e9c3e33c6a7da8ce17ffe: 2023-06-03 09:01:49,208 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-03 09:01:49,209 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782909209"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685782909209"}]},"ts":"1685782909209"} 2023-06-03 09:01:49,211 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-03 09:01:49,212 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-03 09:01:49,212 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782909212"}]},"ts":"1685782909212"} 2023-06-03 09:01:49,213 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-03 09:01:49,220 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=06c25c555b5e9c3e33c6a7da8ce17ffe, ASSIGN}] 2023-06-03 09:01:49,222 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=06c25c555b5e9c3e33c6a7da8ce17ffe, ASSIGN 2023-06-03 09:01:49,223 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=06c25c555b5e9c3e33c6a7da8ce17ffe, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33407,1685782908655; forceNewPlan=false, retain=false 2023-06-03 09:01:49,374 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=06c25c555b5e9c3e33c6a7da8ce17ffe, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33407,1685782908655 2023-06-03 09:01:49,374 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782909374"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685782909374"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685782909374"}]},"ts":"1685782909374"} 2023-06-03 09:01:49,376 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 06c25c555b5e9c3e33c6a7da8ce17ffe, server=jenkins-hbase4.apache.org,33407,1685782908655}] 2023-06-03 09:01:49,531 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. 2023-06-03 09:01:49,531 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 06c25c555b5e9c3e33c6a7da8ce17ffe, NAME => 'hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe.', STARTKEY => '', ENDKEY => ''} 2023-06-03 09:01:49,531 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 06c25c555b5e9c3e33c6a7da8ce17ffe 2023-06-03 09:01:49,531 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-03 09:01:49,531 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 06c25c555b5e9c3e33c6a7da8ce17ffe 2023-06-03 09:01:49,531 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 06c25c555b5e9c3e33c6a7da8ce17ffe 2023-06-03 09:01:49,532 INFO [StoreOpener-06c25c555b5e9c3e33c6a7da8ce17ffe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 06c25c555b5e9c3e33c6a7da8ce17ffe 2023-06-03 09:01:49,533 DEBUG [StoreOpener-06c25c555b5e9c3e33c6a7da8ce17ffe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/namespace/06c25c555b5e9c3e33c6a7da8ce17ffe/info 2023-06-03 09:01:49,534 DEBUG [StoreOpener-06c25c555b5e9c3e33c6a7da8ce17ffe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/namespace/06c25c555b5e9c3e33c6a7da8ce17ffe/info 2023-06-03 09:01:49,534 INFO [StoreOpener-06c25c555b5e9c3e33c6a7da8ce17ffe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 06c25c555b5e9c3e33c6a7da8ce17ffe columnFamilyName info 2023-06-03 09:01:49,534 INFO [StoreOpener-06c25c555b5e9c3e33c6a7da8ce17ffe-1] regionserver.HStore(310): Store=06c25c555b5e9c3e33c6a7da8ce17ffe/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-03 09:01:49,535 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/namespace/06c25c555b5e9c3e33c6a7da8ce17ffe 2023-06-03 09:01:49,535 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/namespace/06c25c555b5e9c3e33c6a7da8ce17ffe 2023-06-03 09:01:49,538 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 06c25c555b5e9c3e33c6a7da8ce17ffe 2023-06-03 09:01:49,540 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/namespace/06c25c555b5e9c3e33c6a7da8ce17ffe/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-03 09:01:49,540 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 06c25c555b5e9c3e33c6a7da8ce17ffe; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=828097, jitterRate=0.05298013985157013}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-03 09:01:49,540 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 06c25c555b5e9c3e33c6a7da8ce17ffe: 2023-06-03 09:01:49,542 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe., pid=6, masterSystemTime=1685782909527 2023-06-03 09:01:49,544 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. 2023-06-03 09:01:49,544 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. 
2023-06-03 09:01:49,545 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=06c25c555b5e9c3e33c6a7da8ce17ffe, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33407,1685782908655 2023-06-03 09:01:49,545 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685782909545"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685782909545"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685782909545"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685782909545"}]},"ts":"1685782909545"} 2023-06-03 09:01:49,548 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-03 09:01:49,548 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 06c25c555b5e9c3e33c6a7da8ce17ffe, server=jenkins-hbase4.apache.org,33407,1685782908655 in 171 msec 2023-06-03 09:01:49,550 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-03 09:01:49,550 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=06c25c555b5e9c3e33c6a7da8ce17ffe, ASSIGN in 329 msec 2023-06-03 09:01:49,551 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-03 09:01:49,551 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685782909551"}]},"ts":"1685782909551"} 2023-06-03 09:01:49,552 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-03 09:01:49,555 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-03 09:01:49,557 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 373 msec 2023-06-03 09:01:49,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-03 09:01:49,585 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-03 09:01:49,585 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:01:49,589 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-03 09:01:49,597 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): 
master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-03 09:01:49,600 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-06-03 09:01:49,611 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-03 09:01:49,619 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-03 09:01:49,628 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 16 msec 2023-06-03 09:01:49,635 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-03 09:01:49,639 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-03 09:01:49,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.951sec 2023-06-03 09:01:49,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-03 09:01:49,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-03 09:01:49,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-03 09:01:49,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34193,1685782908618-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-03 09:01:49,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34193,1685782908618-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-03 09:01:49,642 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-03 09:01:49,676 DEBUG [Listener at localhost/40835] zookeeper.ReadOnlyZKClient(139): Connect 0x2936824d to 127.0.0.1:50636 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-03 09:01:49,681 DEBUG [Listener at localhost/40835] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5f917e32, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-03 09:01:49,682 DEBUG [hconnection-0x796e2b20-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-03 09:01:49,684 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42394, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-03 09:01:49,685 INFO [Listener at localhost/40835] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34193,1685782908618 2023-06-03 09:01:49,685 INFO [Listener at localhost/40835] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-03 09:01:49,689 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-03 09:01:49,689 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:01:49,689 INFO [Listener at localhost/40835] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-03 09:01:49,690 INFO [Listener at localhost/40835] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-03 09:01:49,691 INFO [Listener at localhost/40835] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/test.com,8080,1, archiveDir=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/oldWALs, maxLogs=32 2023-06-03 09:01:49,700 INFO [Listener at localhost/40835] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/test.com,8080,1/test.com%2C8080%2C1.1685782909692 2023-06-03 09:01:49,700 DEBUG [Listener at localhost/40835] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36623,DS-6e08d5cc-bdb0-4ee6-9657-2df53e6dfecb,DISK], DatanodeInfoWithStorage[127.0.0.1:33801,DS-51843690-23fb-4f4a-b551-fab971345e2d,DISK]] 2023-06-03 09:01:49,706 INFO [Listener at localhost/40835] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/test.com,8080,1/test.com%2C8080%2C1.1685782909692 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/test.com,8080,1/test.com%2C8080%2C1.1685782909700 
2023-06-03 09:01:49,707 DEBUG [Listener at localhost/40835] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36623,DS-6e08d5cc-bdb0-4ee6-9657-2df53e6dfecb,DISK], DatanodeInfoWithStorage[127.0.0.1:33801,DS-51843690-23fb-4f4a-b551-fab971345e2d,DISK]] 2023-06-03 09:01:49,707 DEBUG [Listener at localhost/40835] wal.AbstractFSWAL(716): hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/test.com,8080,1/test.com%2C8080%2C1.1685782909692 is not closed yet, will try archiving it next time 2023-06-03 09:01:49,708 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/test.com,8080,1 2023-06-03 09:01:49,716 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/test.com,8080,1/test.com%2C8080%2C1.1685782909692 to hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/oldWALs/test.com%2C8080%2C1.1685782909692 2023-06-03 09:01:49,718 DEBUG [Listener at localhost/40835] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/oldWALs 2023-06-03 09:01:49,718 INFO [Listener at localhost/40835] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1685782909700) 2023-06-03 09:01:49,718 INFO [Listener at localhost/40835] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-03 09:01:49,718 DEBUG [Listener at localhost/40835] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2936824d to 127.0.0.1:50636 2023-06-03 09:01:49,718 DEBUG [Listener at localhost/40835] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:01:49,719 DEBUG [Listener at localhost/40835] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-03 09:01:49,719 DEBUG [Listener at localhost/40835] util.JVMClusterUtil(257): Found active master hash=1278660959, stopped=false 2023-06-03 09:01:49,719 INFO [Listener at localhost/40835] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34193,1685782908618 2023-06-03 09:01:49,721 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): regionserver:33407-0x1008fec51bc0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-03 09:01:49,721 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-03 09:01:49,721 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:01:49,721 INFO [Listener at localhost/40835] procedure2.ProcedureExecutor(629): Stopping 2023-06-03 09:01:49,722 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33407-0x1008fec51bc0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 09:01:49,723 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-03 09:01:49,723 DEBUG [Listener 
at localhost/40835] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x68c81c03 to 127.0.0.1:50636 2023-06-03 09:01:49,723 DEBUG [Listener at localhost/40835] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:01:49,723 INFO [Listener at localhost/40835] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33407,1685782908655' ***** 2023-06-03 09:01:49,723 INFO [Listener at localhost/40835] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-03 09:01:49,723 INFO [RS:0;jenkins-hbase4:33407] regionserver.HeapMemoryManager(220): Stopping 2023-06-03 09:01:49,724 INFO [RS:0;jenkins-hbase4:33407] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-03 09:01:49,724 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-03 09:01:49,724 INFO [RS:0;jenkins-hbase4:33407] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-03 09:01:49,724 INFO [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(3303): Received CLOSE for 06c25c555b5e9c3e33c6a7da8ce17ffe 2023-06-03 09:01:49,724 INFO [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33407,1685782908655 2023-06-03 09:01:49,724 DEBUG [RS:0;jenkins-hbase4:33407] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2b922277 to 127.0.0.1:50636 2023-06-03 09:01:49,724 DEBUG [RS:0;jenkins-hbase4:33407] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:01:49,725 INFO [RS:0;jenkins-hbase4:33407] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-03 09:01:49,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 06c25c555b5e9c3e33c6a7da8ce17ffe, disabling compactions & flushes 2023-06-03 09:01:49,725 INFO [RS:0;jenkins-hbase4:33407] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-03 09:01:49,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. 2023-06-03 09:01:49,725 INFO [RS:0;jenkins-hbase4:33407] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-03 09:01:49,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. 2023-06-03 09:01:49,725 INFO [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-03 09:01:49,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. after waiting 0 ms 2023-06-03 09:01:49,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. 
2023-06-03 09:01:49,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 06c25c555b5e9c3e33c6a7da8ce17ffe 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-03 09:01:49,726 INFO [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-06-03 09:01:49,727 DEBUG [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 06c25c555b5e9c3e33c6a7da8ce17ffe=hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe.} 2023-06-03 09:01:49,727 DEBUG [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(1504): Waiting on 06c25c555b5e9c3e33c6a7da8ce17ffe, 1588230740 2023-06-03 09:01:49,727 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-03 09:01:49,727 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-03 09:01:49,727 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-03 09:01:49,727 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-03 09:01:49,727 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-03 09:01:49,727 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-06-03 09:01:49,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/namespace/06c25c555b5e9c3e33c6a7da8ce17ffe/.tmp/info/f230fce97ea44d9db1b8c53ccca7773b 2023-06-03 09:01:49,744 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/.tmp/info/b6f09da81a194de0af6ba6740530a78d 2023-06-03 09:01:49,746 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/namespace/06c25c555b5e9c3e33c6a7da8ce17ffe/.tmp/info/f230fce97ea44d9db1b8c53ccca7773b as hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/namespace/06c25c555b5e9c3e33c6a7da8ce17ffe/info/f230fce97ea44d9db1b8c53ccca7773b 2023-06-03 09:01:49,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/namespace/06c25c555b5e9c3e33c6a7da8ce17ffe/info/f230fce97ea44d9db1b8c53ccca7773b, entries=2, sequenceid=6, filesize=4.8 K 2023-06-03 09:01:49,752 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 06c25c555b5e9c3e33c6a7da8ce17ffe in 27ms, sequenceid=6, compaction requested=false 2023-06-03 09:01:49,752 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-03 09:01:49,758 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/.tmp/table/147c1fa3593f45b887eeb17f2df50c3e 2023-06-03 09:01:49,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/namespace/06c25c555b5e9c3e33c6a7da8ce17ffe/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-03 09:01:49,758 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. 2023-06-03 09:01:49,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 06c25c555b5e9c3e33c6a7da8ce17ffe: 2023-06-03 09:01:49,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685782909182.06c25c555b5e9c3e33c6a7da8ce17ffe. 2023-06-03 09:01:49,762 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/.tmp/info/b6f09da81a194de0af6ba6740530a78d as hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/info/b6f09da81a194de0af6ba6740530a78d 2023-06-03 09:01:49,766 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/info/b6f09da81a194de0af6ba6740530a78d, entries=10, sequenceid=9, filesize=5.9 K 2023-06-03 09:01:49,767 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/.tmp/table/147c1fa3593f45b887eeb17f2df50c3e as hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/table/147c1fa3593f45b887eeb17f2df50c3e 2023-06-03 09:01:49,770 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/table/147c1fa3593f45b887eeb17f2df50c3e, entries=2, sequenceid=9, filesize=4.7 K 2023-06-03 09:01:49,771 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1290, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 44ms, sequenceid=9, compaction requested=false 2023-06-03 09:01:49,771 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-03 09:01:49,777 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-06-03 09:01:49,778 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-03 09:01:49,778 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-03 09:01:49,778 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-03 09:01:49,778 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-03 09:01:49,927 INFO [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33407,1685782908655; all regions closed. 2023-06-03 09:01:49,928 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/jenkins-hbase4.apache.org,33407,1685782908655 2023-06-03 09:01:49,933 DEBUG [RS:0;jenkins-hbase4:33407] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/oldWALs 2023-06-03 09:01:49,933 INFO [RS:0;jenkins-hbase4:33407] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33407%2C1685782908655.meta:.meta(num 1685782909137) 2023-06-03 09:01:49,933 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/WALs/jenkins-hbase4.apache.org,33407,1685782908655 2023-06-03 09:01:49,937 DEBUG [RS:0;jenkins-hbase4:33407] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/oldWALs 2023-06-03 09:01:49,937 INFO [RS:0;jenkins-hbase4:33407] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33407%2C1685782908655:(num 1685782909035) 2023-06-03 09:01:49,937 DEBUG [RS:0;jenkins-hbase4:33407] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:01:49,937 INFO [RS:0;jenkins-hbase4:33407] regionserver.LeaseManager(133): Closed leases 2023-06-03 09:01:49,938 INFO [RS:0;jenkins-hbase4:33407] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-03 09:01:49,938 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-03 09:01:49,938 INFO [RS:0;jenkins-hbase4:33407] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33407 2023-06-03 09:01:49,941 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): regionserver:33407-0x1008fec51bc0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33407,1685782908655 2023-06-03 09:01:49,941 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 09:01:49,941 ERROR [Listener at localhost/40835-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@1ee57a8e rejected from java.util.concurrent.ThreadPoolExecutor@1adaed8a[Shutting down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 4] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1374) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-06-03 09:01:49,942 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): regionserver:33407-0x1008fec51bc0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-03 09:01:49,943 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33407,1685782908655] 2023-06-03 09:01:49,943 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33407,1685782908655; numProcessing=1 2023-06-03 09:01:49,944 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33407,1685782908655 already deleted, retry=false 2023-06-03 09:01:49,944 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33407,1685782908655 expired; onlineServers=0 2023-06-03 09:01:49,944 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,34193,1685782908618' ***** 2023-06-03 09:01:49,944 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-03 09:01:49,944 DEBUG [M:0;jenkins-hbase4:34193] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@48094c01, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-03 09:01:49,944 INFO [M:0;jenkins-hbase4:34193] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34193,1685782908618 2023-06-03 09:01:49,944 
INFO [M:0;jenkins-hbase4:34193] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34193,1685782908618; all regions closed. 2023-06-03 09:01:49,944 DEBUG [M:0;jenkins-hbase4:34193] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-03 09:01:49,944 DEBUG [M:0;jenkins-hbase4:34193] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-03 09:01:49,944 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-03 09:01:49,945 DEBUG [M:0;jenkins-hbase4:34193] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-03 09:01:49,945 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782908789] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685782908789,5,FailOnTimeoutGroup] 2023-06-03 09:01:49,945 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782908789] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685782908789,5,FailOnTimeoutGroup] 2023-06-03 09:01:49,945 INFO [M:0;jenkins-hbase4:34193] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-03 09:01:49,946 INFO [M:0;jenkins-hbase4:34193] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-03 09:01:49,946 INFO [M:0;jenkins-hbase4:34193] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-06-03 09:01:49,946 DEBUG [M:0;jenkins-hbase4:34193] master.HMaster(1512): Stopping service threads 2023-06-03 09:01:49,946 INFO [M:0;jenkins-hbase4:34193] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-03 09:01:49,946 ERROR [M:0;jenkins-hbase4:34193] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup] 2023-06-03 09:01:49,946 INFO [M:0;jenkins-hbase4:34193] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-03 09:01:49,946 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-03 09:01:49,947 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-03 09:01:49,947 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-03 09:01:49,947 DEBUG [M:0;jenkins-hbase4:34193] zookeeper.ZKUtil(398): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-03 09:01:49,947 WARN [M:0;jenkins-hbase4:34193] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-03 09:01:49,948 INFO [M:0;jenkins-hbase4:34193] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-03 09:01:49,948 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-03 09:01:49,948 INFO [M:0;jenkins-hbase4:34193] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-03 09:01:49,948 DEBUG [M:0;jenkins-hbase4:34193] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-03 09:01:49,949 INFO [M:0;jenkins-hbase4:34193] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:01:49,949 DEBUG [M:0;jenkins-hbase4:34193] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:01:49,949 DEBUG [M:0;jenkins-hbase4:34193] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-03 09:01:49,949 DEBUG [M:0;jenkins-hbase4:34193] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-03 09:01:49,949 INFO [M:0;jenkins-hbase4:34193] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.07 KB heapSize=29.55 KB 2023-06-03 09:01:49,957 INFO [M:0;jenkins-hbase4:34193] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.07 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/cf40fa7d7ebf4d4cb3e5f9f0622615c8 2023-06-03 09:01:49,962 DEBUG [M:0;jenkins-hbase4:34193] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/cf40fa7d7ebf4d4cb3e5f9f0622615c8 as hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/cf40fa7d7ebf4d4cb3e5f9f0622615c8 2023-06-03 09:01:49,966 INFO [M:0;jenkins-hbase4:34193] regionserver.HStore(1080): Added hdfs://localhost:36205/user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/cf40fa7d7ebf4d4cb3e5f9f0622615c8, entries=8, sequenceid=66, filesize=6.3 K 2023-06-03 09:01:49,968 INFO [M:0;jenkins-hbase4:34193] regionserver.HRegion(2948): Finished flush of dataSize ~24.07 KB/24646, heapSize ~29.54 KB/30248, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 19ms, sequenceid=66, compaction requested=false 2023-06-03 09:01:49,969 INFO [M:0;jenkins-hbase4:34193] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-03 09:01:49,969 DEBUG [M:0;jenkins-hbase4:34193] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-03 09:01:49,969 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/33c24464-f8e1-72a9-7b5a-eaf4692e5445/MasterData/WALs/jenkins-hbase4.apache.org,34193,1685782908618 2023-06-03 09:01:49,972 INFO [M:0;jenkins-hbase4:34193] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-03 09:01:49,972 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-03 09:01:49,973 INFO [M:0;jenkins-hbase4:34193] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34193 2023-06-03 09:01:49,975 DEBUG [M:0;jenkins-hbase4:34193] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34193,1685782908618 already deleted, retry=false 2023-06-03 09:01:50,121 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 09:01:50,121 INFO [M:0;jenkins-hbase4:34193] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34193,1685782908618; zookeeper connection closed. 
2023-06-03 09:01:50,121 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): master:34193-0x1008fec51bc0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 09:01:50,221 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): regionserver:33407-0x1008fec51bc0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 09:01:50,221 INFO [RS:0;jenkins-hbase4:33407] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33407,1685782908655; zookeeper connection closed. 2023-06-03 09:01:50,221 DEBUG [Listener at localhost/40835-EventThread] zookeeper.ZKWatcher(600): regionserver:33407-0x1008fec51bc0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-03 09:01:50,222 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@288bf00d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@288bf00d 2023-06-03 09:01:50,222 INFO [Listener at localhost/40835] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-03 09:01:50,222 WARN [Listener at localhost/40835] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 09:01:50,227 INFO [Listener at localhost/40835] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 09:01:50,332 WARN [BP-748071887-172.31.14.131-1685782908077 heartbeating to localhost/127.0.0.1:36205] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 09:01:50,332 WARN [BP-748071887-172.31.14.131-1685782908077 heartbeating to localhost/127.0.0.1:36205] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-748071887-172.31.14.131-1685782908077 (Datanode Uuid af3cf6f5-dc06-4cc1-87be-cd7789582657) service to localhost/127.0.0.1:36205 2023-06-03 09:01:50,332 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/cluster_a14511d1-bff9-3179-9155-71015ab4659a/dfs/data/data3/current/BP-748071887-172.31.14.131-1685782908077] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 09:01:50,333 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/cluster_a14511d1-bff9-3179-9155-71015ab4659a/dfs/data/data4/current/BP-748071887-172.31.14.131-1685782908077] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 09:01:50,333 WARN [Listener at localhost/40835] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-03 09:01:50,336 INFO [Listener at localhost/40835] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 09:01:50,438 WARN [BP-748071887-172.31.14.131-1685782908077 heartbeating to localhost/127.0.0.1:36205] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-03 09:01:50,438 WARN [BP-748071887-172.31.14.131-1685782908077 heartbeating to localhost/127.0.0.1:36205] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-748071887-172.31.14.131-1685782908077 (Datanode Uuid 23288080-744f-4d03-9d8c-9ed6dad66b48) service to localhost/127.0.0.1:36205 2023-06-03 09:01:50,439 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/cluster_a14511d1-bff9-3179-9155-71015ab4659a/dfs/data/data1/current/BP-748071887-172.31.14.131-1685782908077] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 09:01:50,439 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cd4e437c-0960-e3c4-9b94-3442ce6bca3c/cluster_a14511d1-bff9-3179-9155-71015ab4659a/dfs/data/data2/current/BP-748071887-172.31.14.131-1685782908077] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-03 09:01:50,449 INFO [Listener at localhost/40835] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-03 09:01:50,561 INFO [Listener at localhost/40835] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-03 09:01:50,571 INFO [Listener at localhost/40835] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-03 09:01:50,583 INFO [Listener at localhost/40835] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=130 (was 105) - Thread LEAK? -, OpenFileDescriptor=560 (was 537) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=104 (was 86) - SystemLoadAverage LEAK? -, ProcessCount=169 (was 169), AvailableMemoryMB=723 (was 726)