2023-05-29 12:55:26,015 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba 2023-05-29 12:55:26,031 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins 2023-05-29 12:55:26,066 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=264, MaxFileDescriptor=60000, SystemLoadAverage=206, ProcessCount=168, AvailableMemoryMB=4552 2023-05-29 12:55:26,073 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-29 12:55:26,074 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/cluster_61ec3c7c-8513-c544-5107-712bca53b89c, deleteOnExit=true 2023-05-29 12:55:26,074 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-29 12:55:26,075 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/test.cache.data in system properties and HBase conf 2023-05-29 12:55:26,075 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/hadoop.tmp.dir in system properties and HBase conf 2023-05-29 12:55:26,076 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/hadoop.log.dir in system properties and HBase conf 2023-05-29 12:55:26,076 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-29 12:55:26,077 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-29 12:55:26,077 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-29 12:55:26,188 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-05-29 12:55:26,560 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-29 12:55:26,564 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-29 12:55:26,564 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-29 12:55:26,565 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-29 12:55:26,565 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 12:55:26,565 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-29 12:55:26,566 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-29 12:55:26,566 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 12:55:26,566 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 12:55:26,567 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-29 12:55:26,567 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/nfs.dump.dir in system properties and HBase conf 2023-05-29 12:55:26,568 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/java.io.tmpdir in system properties and HBase conf 2023-05-29 12:55:26,568 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 12:55:26,569 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-29 12:55:26,569 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-29 12:55:27,057 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-29 12:55:27,071 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 12:55:27,075 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 12:55:27,378 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-05-29 12:55:27,556 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-05-29 12:55:27,570 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:55:27,602 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:55:27,661 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/java.io.tmpdir/Jetty_localhost_37599_hdfs____.rd47xh/webapp 2023-05-29 12:55:27,797 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37599 2023-05-29 12:55:27,804 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
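For orientation, the StartMiniClusterOption echoed near the top of this log (numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1) and the HBaseClassTestRule timeout are normally declared in the test class roughly as in the sketch below. ExampleMiniClusterTest is a made-up name and this is the usual HBaseTestingUtility pattern, not the actual TestLogRolling source.

import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;

public class ExampleMiniClusterTest {

  // Enforces the per-class timeout reported above ("timeout: 13 mins").
  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
      HBaseClassTestRule.forClass(ExampleMiniClusterTest.class);

  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    // Mirrors the logged option: 1 master, 1 region server, 2 HDFS data nodes, 1 ZK server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);
  }

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }
}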
2023-05-29 12:55:27,807 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 12:55:27,807 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 12:55:28,215 WARN [Listener at localhost/40317] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:55:28,291 WARN [Listener at localhost/40317] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:55:28,312 WARN [Listener at localhost/40317] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:55:28,318 INFO [Listener at localhost/40317] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:55:28,322 INFO [Listener at localhost/40317] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/java.io.tmpdir/Jetty_localhost_36637_datanode____un3z9a/webapp 2023-05-29 12:55:28,423 INFO [Listener at localhost/40317] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36637 2023-05-29 12:55:28,748 WARN [Listener at localhost/46145] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:55:28,760 WARN [Listener at localhost/46145] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:55:28,763 WARN [Listener at localhost/46145] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:55:28,766 INFO [Listener at localhost/46145] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:55:28,770 INFO [Listener at localhost/46145] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/java.io.tmpdir/Jetty_localhost_44159_datanode____.a5a6is/webapp 2023-05-29 12:55:28,865 INFO [Listener at localhost/46145] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44159 2023-05-29 12:55:28,873 WARN [Listener at localhost/46381] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:55:29,187 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x338ecd78706345d7: Processing first storage report for DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453 from datanode 18a4d8fc-7a15-4b8c-96b2-f86761de45fc 2023-05-29 12:55:29,188 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x338ecd78706345d7: from storage DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453 node DatanodeRegistration(127.0.0.1:41803, datanodeUuid=18a4d8fc-7a15-4b8c-96b2-f86761de45fc, infoPort=42927, infoSecurePort=0, ipcPort=46145, storageInfo=lv=-57;cid=testClusterID;nsid=1023407;c=1685364927147), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 12:55:29,188 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0x6339123c4f1c7988: Processing first storage report for DS-42a060da-f71b-4bdc-b552-7803dc51a3ec from datanode 2d444b52-d8c8-43d6-b01c-7ffad814bc5c 2023-05-29 12:55:29,188 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6339123c4f1c7988: from storage DS-42a060da-f71b-4bdc-b552-7803dc51a3ec node DatanodeRegistration(127.0.0.1:34347, datanodeUuid=2d444b52-d8c8-43d6-b01c-7ffad814bc5c, infoPort=39321, infoSecurePort=0, ipcPort=46381, storageInfo=lv=-57;cid=testClusterID;nsid=1023407;c=1685364927147), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:55:29,188 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x338ecd78706345d7: Processing first storage report for DS-85ba99d5-3eab-45cc-9c12-0ce6ea3fd9a6 from datanode 18a4d8fc-7a15-4b8c-96b2-f86761de45fc 2023-05-29 12:55:29,188 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x338ecd78706345d7: from storage DS-85ba99d5-3eab-45cc-9c12-0ce6ea3fd9a6 node DatanodeRegistration(127.0.0.1:41803, datanodeUuid=18a4d8fc-7a15-4b8c-96b2-f86761de45fc, infoPort=42927, infoSecurePort=0, ipcPort=46145, storageInfo=lv=-57;cid=testClusterID;nsid=1023407;c=1685364927147), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 12:55:29,189 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6339123c4f1c7988: Processing first storage report for DS-d7173547-f056-4af9-aaea-bf95d9e359c3 from datanode 2d444b52-d8c8-43d6-b01c-7ffad814bc5c 2023-05-29 12:55:29,189 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6339123c4f1c7988: from storage DS-d7173547-f056-4af9-aaea-bf95d9e359c3 node DatanodeRegistration(127.0.0.1:34347, datanodeUuid=2d444b52-d8c8-43d6-b01c-7ffad814bc5c, infoPort=39321, infoSecurePort=0, ipcPort=46381, storageInfo=lv=-57;cid=testClusterID;nsid=1023407;c=1685364927147), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:55:29,257 DEBUG [Listener at localhost/46381] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba 2023-05-29 12:55:29,325 INFO [Listener at localhost/46381] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/cluster_61ec3c7c-8513-c544-5107-712bca53b89c/zookeeper_0, clientPort=63514, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/cluster_61ec3c7c-8513-c544-5107-712bca53b89c/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/cluster_61ec3c7c-8513-c544-5107-712bca53b89c/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-29 12:55:29,339 INFO [Listener at localhost/46381] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63514 2023-05-29 12:55:29,351 INFO [Listener at localhost/46381] fs.HFileSystem(337): 
Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:55:29,353 INFO [Listener at localhost/46381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:55:30,004 INFO [Listener at localhost/46381] util.FSUtils(471): Created version file at hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d with version=8 2023-05-29 12:55:30,004 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/hbase-staging 2023-05-29 12:55:30,296 INFO [Listener at localhost/46381] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-05-29 12:55:30,772 INFO [Listener at localhost/46381] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 12:55:30,804 INFO [Listener at localhost/46381] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:55:30,804 INFO [Listener at localhost/46381] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 12:55:30,804 INFO [Listener at localhost/46381] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 12:55:30,804 INFO [Listener at localhost/46381] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:55:30,805 INFO [Listener at localhost/46381] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 12:55:30,941 INFO [Listener at localhost/46381] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 12:55:31,011 DEBUG [Listener at localhost/46381] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-05-29 12:55:31,103 INFO [Listener at localhost/46381] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34525 2023-05-29 12:55:31,112 INFO [Listener at localhost/46381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:55:31,115 INFO [Listener at localhost/46381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:55:31,138 INFO [Listener at localhost/46381] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34525 connecting to ZooKeeper ensemble=127.0.0.1:63514 2023-05-29 12:55:31,189 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:345250x0, 
quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 12:55:31,191 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34525-0x1007702761b0000 connected 2023-05-29 12:55:31,214 DEBUG [Listener at localhost/46381] zookeeper.ZKUtil(164): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:55:31,214 DEBUG [Listener at localhost/46381] zookeeper.ZKUtil(164): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:55:31,218 DEBUG [Listener at localhost/46381] zookeeper.ZKUtil(164): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 12:55:31,225 DEBUG [Listener at localhost/46381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34525 2023-05-29 12:55:31,226 DEBUG [Listener at localhost/46381] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34525 2023-05-29 12:55:31,226 DEBUG [Listener at localhost/46381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34525 2023-05-29 12:55:31,227 DEBUG [Listener at localhost/46381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34525 2023-05-29 12:55:31,227 DEBUG [Listener at localhost/46381] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34525 2023-05-29 12:55:31,232 INFO [Listener at localhost/46381] master.HMaster(444): hbase.rootdir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d, hbase.cluster.distributed=false 2023-05-29 12:55:31,300 INFO [Listener at localhost/46381] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 12:55:31,301 INFO [Listener at localhost/46381] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:55:31,301 INFO [Listener at localhost/46381] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 12:55:31,301 INFO [Listener at localhost/46381] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 12:55:31,301 INFO [Listener at localhost/46381] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:55:31,301 INFO [Listener at localhost/46381] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 12:55:31,306 INFO [Listener at localhost/46381] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 12:55:31,309 INFO [Listener at localhost/46381] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44081 2023-05-29 12:55:31,312 INFO 
[Listener at localhost/46381] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 12:55:31,318 DEBUG [Listener at localhost/46381] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 12:55:31,319 INFO [Listener at localhost/46381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:55:31,321 INFO [Listener at localhost/46381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:55:31,323 INFO [Listener at localhost/46381] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44081 connecting to ZooKeeper ensemble=127.0.0.1:63514 2023-05-29 12:55:31,327 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): regionserver:440810x0, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 12:55:31,328 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44081-0x1007702761b0001 connected 2023-05-29 12:55:31,328 DEBUG [Listener at localhost/46381] zookeeper.ZKUtil(164): regionserver:44081-0x1007702761b0001, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:55:31,329 DEBUG [Listener at localhost/46381] zookeeper.ZKUtil(164): regionserver:44081-0x1007702761b0001, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:55:31,330 DEBUG [Listener at localhost/46381] zookeeper.ZKUtil(164): regionserver:44081-0x1007702761b0001, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 12:55:31,330 DEBUG [Listener at localhost/46381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44081 2023-05-29 12:55:31,331 DEBUG [Listener at localhost/46381] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44081 2023-05-29 12:55:31,331 DEBUG [Listener at localhost/46381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44081 2023-05-29 12:55:31,332 DEBUG [Listener at localhost/46381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44081 2023-05-29 12:55:31,332 DEBUG [Listener at localhost/46381] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44081 2023-05-29 12:55:31,333 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34525,1685364930140 2023-05-29 12:55:31,342 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 12:55:31,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34525,1685364930140 
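The "Allocating BlockCache size=782.40 MB, blockSize=64 KB" and MobFileCache lines above reflect sizing driven by configuration. A rough sketch of the knobs involved follows; the key names are assumptions about the standard hbase-site.xml settings (they do not appear in the log itself), 0.4 is the default heap fraction, and the concrete 782.40 MB figure simply follows from this run's JVM heap.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CacheConfigSketch {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // On-heap block cache is sized as a fraction of the region server heap.
    conf.setFloat("hfile.block.cache.size", 0.4f);
    // MobFileCache settings matching the logged values:
    // cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
    conf.setInt("hbase.mob.file.cache.size", 1000);
    conf.setLong("hbase.mob.cache.evict.period", 3600);
    conf.setFloat("hbase.mob.cache.evict.remain.ratio", 0.5f);
    return conf;
  }
}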
2023-05-29 12:55:31,363 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 12:55:31,363 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): regionserver:44081-0x1007702761b0001, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 12:55:31,363 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:55:31,364 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 12:55:31,365 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34525,1685364930140 from backup master directory 2023-05-29 12:55:31,365 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 12:55:31,368 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34525,1685364930140 2023-05-29 12:55:31,368 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 12:55:31,369 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 12:55:31,369 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34525,1685364930140 2023-05-29 12:55:31,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-05-29 12:55:31,373 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-05-29 12:55:31,459 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/hbase.id with ID: d87d7851-c285-42dd-b7ac-c1dc2f4a9e60 2023-05-29 12:55:31,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:55:31,514 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:55:31,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2d12d612 to 127.0.0.1:63514 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:55:31,586 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@b27fff9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:55:31,607 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 12:55:31,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-29 12:55:31,618 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:55:31,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/data/master/store-tmp 2023-05-29 12:55:31,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:55:31,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 12:55:31,693 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:55:31,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:55:31,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 12:55:31,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:55:31,694 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:55:31,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:55:31,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/WALs/jenkins-hbase4.apache.org,34525,1685364930140 2023-05-29 12:55:31,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34525%2C1685364930140, suffix=, logDir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/WALs/jenkins-hbase4.apache.org,34525,1685364930140, archiveDir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/oldWALs, maxLogs=10 2023-05-29 12:55:31,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate() at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.(CommonFSUtils.java:750) at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160) at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160) at 
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62) at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295) at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200) at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:55:31,762 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/WALs/jenkins-hbase4.apache.org,34525,1685364930140/jenkins-hbase4.apache.org%2C34525%2C1685364930140.1685364931735 2023-05-29 12:55:31,762 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:55:31,762 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:55:31,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:55:31,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:55:31,767 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:55:31,828 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:55:31,836 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-29 12:55:31,863 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-29 12:55:31,877 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:55:31,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:55:31,885 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:55:31,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:55:31,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:55:31,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=813561, jitterRate=0.034497007727622986}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:55:31,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:55:31,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-29 12:55:31,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-29 12:55:31,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-29 12:55:31,930 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
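The java.lang.NoSuchMethodException for DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate() printed a little earlier is benign on Hadoop 2.10: as the stack trace shows, CommonFSUtils probes for that builder method via reflection in a static initializer and simply skips the replicate hint when the method is absent, which is what the DEBUG message records. A simplified sketch of that probe pattern (illustrative only, not the actual CommonFSUtils code; ReplicateProbeSketch is a made-up name):

import java.lang.reflect.Method;
import org.apache.hadoop.hdfs.DistributedFileSystem.HdfsDataOutputStreamBuilder;

public final class ReplicateProbeSketch {

  // Null when the running Hadoop version does not expose replicate() on the builder.
  private static final Method REPLICATE_METHOD = probe();

  private static Method probe() {
    try {
      return HdfsDataOutputStreamBuilder.class.getMethod("replicate");
    } catch (NoSuchMethodException e) {
      // Older Hadoop (e.g. 2.10): fall back, as the DEBUG line above describes
      // ("Could not find replicate method on builder; will not set replicate ...").
      return null;
    }
  }

  public static boolean replicateSupported() {
    return REPLICATE_METHOD != null;
  }
}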
2023-05-29 12:55:31,932 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-05-29 12:55:31,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 34 msec 2023-05-29 12:55:31,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-29 12:55:31,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-29 12:55:31,999 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-29 12:55:32,025 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-29 12:55:32,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-29 12:55:32,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-29 12:55:32,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-29 12:55:32,039 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-29 12:55:32,046 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:55:32,047 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-29 12:55:32,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-29 12:55:32,059 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-29 12:55:32,063 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): regionserver:44081-0x1007702761b0001, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 12:55:32,063 DEBUG [Listener at 
localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 12:55:32,064 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:55:32,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34525,1685364930140, sessionid=0x1007702761b0000, setting cluster-up flag (Was=false) 2023-05-29 12:55:32,077 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:55:32,084 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-29 12:55:32,085 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34525,1685364930140 2023-05-29 12:55:32,091 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:55:32,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-29 12:55:32,096 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34525,1685364930140 2023-05-29 12:55:32,098 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/.hbase-snapshot/.tmp 2023-05-29 12:55:32,135 INFO [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(951): ClusterId : d87d7851-c285-42dd-b7ac-c1dc2f4a9e60 2023-05-29 12:55:32,139 DEBUG [RS:0;jenkins-hbase4:44081] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 12:55:32,145 DEBUG [RS:0;jenkins-hbase4:44081] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 12:55:32,145 DEBUG [RS:0;jenkins-hbase4:44081] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 12:55:32,148 DEBUG [RS:0;jenkins-hbase4:44081] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 12:55:32,149 DEBUG [RS:0;jenkins-hbase4:44081] zookeeper.ReadOnlyZKClient(139): Connect 0x1a636be9 to 127.0.0.1:63514 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:55:32,153 DEBUG [RS:0;jenkins-hbase4:44081] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@14a486d5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, 
bind address=null 2023-05-29 12:55:32,154 DEBUG [RS:0;jenkins-hbase4:44081] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@109b0099, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 12:55:32,176 DEBUG [RS:0;jenkins-hbase4:44081] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:44081 2023-05-29 12:55:32,180 INFO [RS:0;jenkins-hbase4:44081] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 12:55:32,180 INFO [RS:0;jenkins-hbase4:44081] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 12:55:32,180 DEBUG [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(1022): About to register with Master. 2023-05-29 12:55:32,182 INFO [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,34525,1685364930140 with isa=jenkins-hbase4.apache.org/172.31.14.131:44081, startcode=1685364931300 2023-05-29 12:55:32,198 DEBUG [RS:0;jenkins-hbase4:44081] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 12:55:32,205 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-29 12:55:32,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:55:32,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:55:32,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:55:32,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:55:32,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-29 12:55:32,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:55:32,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 12:55:32,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:55:32,220 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685364962220 2023-05-29 12:55:32,223 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-29 12:55:32,226 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 12:55:32,226 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-29 12:55:32,231 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 12:55:32,232 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-29 12:55:32,239 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-29 12:55:32,239 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-29 12:55:32,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-29 12:55:32,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-29 12:55:32,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
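The hbase:meta descriptor printed above ({NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', BLOCKSIZE => '8192', ...}) is the toString form of a table descriptor. For reference, an equivalent 'info' family built through the public descriptor API would look roughly like the sketch below; this is an illustration of the builder API, not the FSTableDescriptors code that actually runs here, and MetaInfoFamilySketch is a made-up name.

import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaInfoFamilySketch {
  public static ColumnFamilyDescriptor build() {
    return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
        .setBloomFilterType(BloomType.NONE) // BLOOMFILTER => 'NONE'
        .setInMemory(true)                  // IN_MEMORY => 'true'
        .setMaxVersions(3)                  // VERSIONS => '3'
        .setBlocksize(8192)                 // BLOCKSIZE => '8192'
        .build();
  }
}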
2023-05-29 12:55:32,242 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-29 12:55:32,243 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-29 12:55:32,244 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-29 12:55:32,246 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-29 12:55:32,246 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-29 12:55:32,247 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685364932247,5,FailOnTimeoutGroup] 2023-05-29 12:55:32,248 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685364932247,5,FailOnTimeoutGroup] 2023-05-29 12:55:32,248 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 12:55:32,248 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-29 12:55:32,249 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-29 12:55:32,249 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
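The "Chore ScheduledChore name=..., period=..., unit=MILLISECONDS is enabled" lines are the master handing its periodic cleaners to its ChoreService. The general shape of such a chore is sketched below; ExampleCleanerChore is a made-up name, Stoppable does not appear in the log and is assumed, and the period mirrors the logged LogsCleaner/HFileCleaner value.

import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ExampleCleanerChore extends ScheduledChore {

  public ExampleCleanerChore(Stoppable stopper) {
    // Same 600000 ms period as the LogsCleaner/HFileCleaner chores above.
    super("ExampleCleanerChore", stopper, 600000);
  }

  @Override
  protected void chore() {
    // Periodic work goes here; the real chores scan old WALs / HFiles and delete
    // what their cleaner delegates allow.
  }

  // Typical scheduling, as the master does for its cleaners:
  //   ChoreService choreService = new ChoreService("example");
  //   choreService.scheduleChore(new ExampleCleanerChore(stopper));
}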
2023-05-29 12:55:32,301 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 12:55:32,303 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 12:55:32,303 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d 2023-05-29 12:55:32,326 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:55:32,330 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 12:55:32,334 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/info 2023-05-29 12:55:32,335 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 12:55:32,337 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:55:32,337 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 12:55:32,340 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:55:32,341 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 12:55:32,341 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:55:32,342 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 12:55:32,344 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/table 2023-05-29 12:55:32,344 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33095, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 12:55:32,345 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 12:55:32,346 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:55:32,348 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740 2023-05-29 12:55:32,349 DEBUG [PEWorker-1] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740 2023-05-29 12:55:32,353 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 12:55:32,356 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 12:55:32,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34525] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:55:32,360 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:55:32,361 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=826482, jitterRate=0.05092713236808777}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 12:55:32,361 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 12:55:32,361 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 12:55:32,361 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 12:55:32,361 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 12:55:32,362 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 12:55:32,362 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 12:55:32,362 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 12:55:32,363 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 12:55:32,367 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 12:55:32,368 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-29 12:55:32,376 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-29 12:55:32,378 DEBUG [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d 2023-05-29 12:55:32,379 DEBUG [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40317 2023-05-29 12:55:32,379 DEBUG [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 12:55:32,385 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:55:32,386 DEBUG 
[RS:0;jenkins-hbase4:44081] zookeeper.ZKUtil(162): regionserver:44081-0x1007702761b0001, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:55:32,386 WARN [RS:0;jenkins-hbase4:44081] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-29 12:55:32,387 INFO [RS:0;jenkins-hbase4:44081] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:55:32,387 DEBUG [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(1946): logDir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/WALs/jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:55:32,388 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44081,1685364931300] 2023-05-29 12:55:32,392 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-29 12:55:32,396 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-29 12:55:32,399 DEBUG [RS:0;jenkins-hbase4:44081] zookeeper.ZKUtil(162): regionserver:44081-0x1007702761b0001, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:55:32,409 DEBUG [RS:0;jenkins-hbase4:44081] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 12:55:32,417 INFO [RS:0;jenkins-hbase4:44081] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 12:55:32,435 INFO [RS:0;jenkins-hbase4:44081] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 12:55:32,438 INFO [RS:0;jenkins-hbase4:44081] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 12:55:32,439 INFO [RS:0;jenkins-hbase4:44081] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 12:55:32,439 INFO [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 12:55:32,447 INFO [RS:0;jenkins-hbase4:44081] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
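For orientation, the MemStoreFlusher line above reports globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M; the ratio between the two is roughly 0.95. The snippet below only checks that arithmetic; the property names mentioned in its comments are the usual knobs for these limits and are an assumption, not something this log prints.

// Quick arithmetic check of the MemStoreFlusher numbers in the log above (illustration only).
public class MemStoreLimitCheck {
  public static void main(String[] args) {
    double globalLimitMb = 782.4; // "globalMemStoreLimit=782.4 M" from the log
    double lowMarkMb = 743.3;     // "globalMemStoreLimitLowMark=743.3 M" from the log

    // Prints ~0.950, i.e. the low mark sits at roughly 95% of the global limit.
    System.out.printf("low mark / global limit = %.3f%n", lowMarkMb / globalLimitMb);

    // Assumed (not shown in this log): these values usually derive from
    //   hbase.regionserver.global.memstore.size             - fraction of RS heap
    //   hbase.regionserver.global.memstore.size.lower.limit - fraction of the limit (~0.95)
  }
}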
2023-05-29 12:55:32,447 DEBUG [RS:0;jenkins-hbase4:44081] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:55:32,448 DEBUG [RS:0;jenkins-hbase4:44081] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:55:32,448 DEBUG [RS:0;jenkins-hbase4:44081] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:55:32,448 DEBUG [RS:0;jenkins-hbase4:44081] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:55:32,448 DEBUG [RS:0;jenkins-hbase4:44081] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:55:32,449 DEBUG [RS:0;jenkins-hbase4:44081] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 12:55:32,449 DEBUG [RS:0;jenkins-hbase4:44081] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:55:32,449 DEBUG [RS:0;jenkins-hbase4:44081] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:55:32,449 DEBUG [RS:0;jenkins-hbase4:44081] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:55:32,449 DEBUG [RS:0;jenkins-hbase4:44081] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:55:32,450 INFO [RS:0;jenkins-hbase4:44081] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 12:55:32,450 INFO [RS:0;jenkins-hbase4:44081] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 12:55:32,450 INFO [RS:0;jenkins-hbase4:44081] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 12:55:32,470 INFO [RS:0;jenkins-hbase4:44081] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 12:55:32,472 INFO [RS:0;jenkins-hbase4:44081] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44081,1685364931300-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 12:55:32,493 INFO [RS:0;jenkins-hbase4:44081] regionserver.Replication(203): jenkins-hbase4.apache.org,44081,1685364931300 started 2023-05-29 12:55:32,493 INFO [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44081,1685364931300, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44081, sessionid=0x1007702761b0001 2023-05-29 12:55:32,493 DEBUG [RS:0;jenkins-hbase4:44081] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 12:55:32,493 DEBUG [RS:0;jenkins-hbase4:44081] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:55:32,493 DEBUG [RS:0;jenkins-hbase4:44081] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44081,1685364931300' 2023-05-29 12:55:32,493 DEBUG [RS:0;jenkins-hbase4:44081] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:55:32,494 DEBUG [RS:0;jenkins-hbase4:44081] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:55:32,495 DEBUG [RS:0;jenkins-hbase4:44081] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 12:55:32,495 DEBUG [RS:0;jenkins-hbase4:44081] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 12:55:32,495 DEBUG [RS:0;jenkins-hbase4:44081] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:55:32,495 DEBUG [RS:0;jenkins-hbase4:44081] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44081,1685364931300' 2023-05-29 12:55:32,495 DEBUG [RS:0;jenkins-hbase4:44081] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 12:55:32,495 DEBUG [RS:0;jenkins-hbase4:44081] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 12:55:32,496 DEBUG [RS:0;jenkins-hbase4:44081] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 12:55:32,496 INFO [RS:0;jenkins-hbase4:44081] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 12:55:32,496 INFO [RS:0;jenkins-hbase4:44081] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-29 12:55:32,547 DEBUG [jenkins-hbase4:34525] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-29 12:55:32,550 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44081,1685364931300, state=OPENING 2023-05-29 12:55:32,556 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-29 12:55:32,559 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:55:32,559 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 12:55:32,562 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44081,1685364931300}] 2023-05-29 12:55:32,607 INFO [RS:0;jenkins-hbase4:44081] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44081%2C1685364931300, suffix=, logDir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/WALs/jenkins-hbase4.apache.org,44081,1685364931300, archiveDir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/oldWALs, maxLogs=32 2023-05-29 12:55:32,630 INFO [RS:0;jenkins-hbase4:44081] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/WALs/jenkins-hbase4.apache.org,44081,1685364931300/jenkins-hbase4.apache.org%2C44081%2C1685364931300.1685364932610 2023-05-29 12:55:32,631 DEBUG [RS:0;jenkins-hbase4:44081] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:55:32,744 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:55:32,746 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-29 12:55:32,750 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33418, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-29 12:55:32,762 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-29 12:55:32,763 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:55:32,766 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44081%2C1685364931300.meta, suffix=.meta, logDir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/WALs/jenkins-hbase4.apache.org,44081,1685364931300, archiveDir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/oldWALs, maxLogs=32 2023-05-29 12:55:32,780 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/WALs/jenkins-hbase4.apache.org,44081,1685364931300/jenkins-hbase4.apache.org%2C44081%2C1685364931300.meta.1685364932768.meta 2023-05-29 12:55:32,780 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK], DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK]] 2023-05-29 12:55:32,781 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:55:32,782 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-29 12:55:32,798 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-29 12:55:32,802 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-29 12:55:32,808 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-29 12:55:32,808 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:55:32,808 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-29 12:55:32,808 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-29 12:55:32,810 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 12:55:32,812 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/info 2023-05-29 12:55:32,812 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/info 2023-05-29 12:55:32,812 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 12:55:32,813 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:55:32,813 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 12:55:32,815 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:55:32,815 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:55:32,815 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 12:55:32,816 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:55:32,816 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 12:55:32,818 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/table 2023-05-29 12:55:32,818 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/table 2023-05-29 12:55:32,818 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 12:55:32,819 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:55:32,821 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740 2023-05-29 12:55:32,823 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740 2023-05-29 12:55:32,827 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 12:55:32,829 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 12:55:32,830 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=727992, jitterRate=-0.07431074976921082}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 12:55:32,830 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 12:55:32,840 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685364932737 2023-05-29 12:55:32,857 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-29 12:55:32,858 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-29 12:55:32,858 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44081,1685364931300, state=OPEN 2023-05-29 12:55:32,861 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-29 12:55:32,861 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 12:55:32,866 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-29 12:55:32,866 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44081,1685364931300 in 299 msec 2023-05-29 12:55:32,871 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-29 12:55:32,871 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 492 msec 2023-05-29 12:55:32,877 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 734 msec 2023-05-29 12:55:32,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685364932877, completionTime=-1 2023-05-29 12:55:32,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-29 12:55:32,877 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-29 12:55:32,936 DEBUG [hconnection-0x3309a0ac-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 12:55:32,938 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33430, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 12:55:32,954 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-29 12:55:32,955 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685364992955 2023-05-29 12:55:32,955 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685365052955 2023-05-29 12:55:32,955 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 77 msec 2023-05-29 12:55:32,979 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34525,1685364930140-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 12:55:32,979 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34525,1685364930140-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:55:32,979 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34525,1685364930140-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:55:32,981 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34525, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:55:32,981 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-29 12:55:32,986 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-29 12:55:32,994 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-29 12:55:32,995 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 12:55:33,005 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-29 12:55:33,007 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 12:55:33,009 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 12:55:33,028 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/.tmp/data/hbase/namespace/8842fbd4618c95a6a38eb896f3e2d098 2023-05-29 12:55:33,031 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/.tmp/data/hbase/namespace/8842fbd4618c95a6a38eb896f3e2d098 empty. 2023-05-29 12:55:33,031 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/.tmp/data/hbase/namespace/8842fbd4618c95a6a38eb896f3e2d098 2023-05-29 12:55:33,031 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-29 12:55:33,092 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-29 12:55:33,094 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8842fbd4618c95a6a38eb896f3e2d098, NAME => 'hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/.tmp 2023-05-29 12:55:33,109 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:55:33,109 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 8842fbd4618c95a6a38eb896f3e2d098, disabling compactions & flushes 2023-05-29 12:55:33,109 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. 
2023-05-29 12:55:33,109 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. 2023-05-29 12:55:33,109 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. after waiting 0 ms 2023-05-29 12:55:33,109 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. 2023-05-29 12:55:33,109 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. 2023-05-29 12:55:33,109 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 8842fbd4618c95a6a38eb896f3e2d098: 2023-05-29 12:55:33,114 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 12:55:33,129 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685364933117"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685364933117"}]},"ts":"1685364933117"} 2023-05-29 12:55:33,155 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 12:55:33,157 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 12:55:33,161 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685364933157"}]},"ts":"1685364933157"} 2023-05-29 12:55:33,165 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-29 12:55:33,174 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8842fbd4618c95a6a38eb896f3e2d098, ASSIGN}] 2023-05-29 12:55:33,176 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8842fbd4618c95a6a38eb896f3e2d098, ASSIGN 2023-05-29 12:55:33,178 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8842fbd4618c95a6a38eb896f3e2d098, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44081,1685364931300; forceNewPlan=false, retain=false 2023-05-29 12:55:33,329 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8842fbd4618c95a6a38eb896f3e2d098, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:55:33,329 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685364933329"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685364933329"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685364933329"}]},"ts":"1685364933329"} 2023-05-29 12:55:33,333 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 8842fbd4618c95a6a38eb896f3e2d098, server=jenkins-hbase4.apache.org,44081,1685364931300}] 2023-05-29 12:55:33,494 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. 2023-05-29 12:55:33,496 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8842fbd4618c95a6a38eb896f3e2d098, NAME => 'hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:55:33,497 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8842fbd4618c95a6a38eb896f3e2d098 2023-05-29 12:55:33,497 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:55:33,497 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8842fbd4618c95a6a38eb896f3e2d098 2023-05-29 12:55:33,497 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8842fbd4618c95a6a38eb896f3e2d098 2023-05-29 12:55:33,499 INFO [StoreOpener-8842fbd4618c95a6a38eb896f3e2d098-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8842fbd4618c95a6a38eb896f3e2d098 2023-05-29 12:55:33,502 DEBUG [StoreOpener-8842fbd4618c95a6a38eb896f3e2d098-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/namespace/8842fbd4618c95a6a38eb896f3e2d098/info 2023-05-29 12:55:33,502 DEBUG [StoreOpener-8842fbd4618c95a6a38eb896f3e2d098-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/namespace/8842fbd4618c95a6a38eb896f3e2d098/info 2023-05-29 12:55:33,502 INFO [StoreOpener-8842fbd4618c95a6a38eb896f3e2d098-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8842fbd4618c95a6a38eb896f3e2d098 columnFamilyName info 2023-05-29 12:55:33,503 INFO [StoreOpener-8842fbd4618c95a6a38eb896f3e2d098-1] regionserver.HStore(310): Store=8842fbd4618c95a6a38eb896f3e2d098/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:55:33,504 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/namespace/8842fbd4618c95a6a38eb896f3e2d098 2023-05-29 12:55:33,505 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/namespace/8842fbd4618c95a6a38eb896f3e2d098 2023-05-29 12:55:33,510 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8842fbd4618c95a6a38eb896f3e2d098 2023-05-29 12:55:33,513 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/namespace/8842fbd4618c95a6a38eb896f3e2d098/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:55:33,514 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8842fbd4618c95a6a38eb896f3e2d098; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=765514, jitterRate=-0.026599720120429993}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:55:33,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8842fbd4618c95a6a38eb896f3e2d098: 2023-05-29 12:55:33,516 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098., pid=6, masterSystemTime=1685364933487 2023-05-29 12:55:33,520 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. 2023-05-29 12:55:33,521 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. 
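Each "Opened ..." line in this log reports a ConstantSizeRegionSplitPolicy desiredMaxFileSize together with a jitterRate (826482 / +0.0509 for hbase:meta, 727992 / -0.0743 on its second open, 765514 / -0.0266 for hbase:namespace just above). Those numbers are consistent with a base of 786432 bytes, the hbase.hregion.max.filesize value flagged by TableDescriptorChecker further down, scaled by the jitter; the sketch below reproduces the arithmetic. Treat the relationship as an inference from the logged numbers, not something the log states explicitly.

// Reproduces the desiredMaxFileSize values printed by the split-policy lines in this log,
// assuming desired = base + (long)(base * jitterRate) with base = 786432 bytes
// (the hbase.hregion.max.filesize value flagged later by TableDescriptorChecker).
public class SplitJitterCheck {
  public static void main(String[] args) {
    long base = 786432L;
    double[] jitterRates = {
        0.05092713236808777,    // logged desiredMaxFileSize=826482 (hbase:meta, first open)
        -0.07431074976921082,   // logged desiredMaxFileSize=727992 (hbase:meta, second open)
        -0.026599720120429993,  // logged desiredMaxFileSize=765514 (hbase:namespace)
    };
    for (double jitter : jitterRates) {
      long desired = base + (long) (base * jitter);
      System.out.printf("jitterRate=%+.6f -> desiredMaxFileSize=%d%n", jitter, desired);
    }
  }
}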
2023-05-29 12:55:33,522 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8842fbd4618c95a6a38eb896f3e2d098, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:55:33,523 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685364933521"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685364933521"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685364933521"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685364933521"}]},"ts":"1685364933521"} 2023-05-29 12:55:33,530 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-29 12:55:33,530 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 8842fbd4618c95a6a38eb896f3e2d098, server=jenkins-hbase4.apache.org,44081,1685364931300 in 193 msec 2023-05-29 12:55:33,534 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-29 12:55:33,534 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8842fbd4618c95a6a38eb896f3e2d098, ASSIGN in 356 msec 2023-05-29 12:55:33,535 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 12:55:33,536 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685364933536"}]},"ts":"1685364933536"} 2023-05-29 12:55:33,539 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-29 12:55:33,542 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 12:55:33,545 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 546 msec 2023-05-29 12:55:33,608 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-29 12:55:33,610 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:55:33,610 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:55:33,648 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-29 12:55:33,665 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): 
master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:55:33,670 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 32 msec 2023-05-29 12:55:33,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-29 12:55:33,694 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:55:33,700 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 17 msec 2023-05-29 12:55:33,708 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-29 12:55:33,711 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-29 12:55:33,712 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.343sec 2023-05-29 12:55:33,714 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-29 12:55:33,716 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-29 12:55:33,716 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-29 12:55:33,717 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34525,1685364930140-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-29 12:55:33,717 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34525,1685364930140-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-29 12:55:33,728 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-29 12:55:33,741 DEBUG [Listener at localhost/46381] zookeeper.ReadOnlyZKClient(139): Connect 0x2741d125 to 127.0.0.1:63514 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:55:33,745 DEBUG [Listener at localhost/46381] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@553f00ac, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:55:33,758 DEBUG [hconnection-0x243f804e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 12:55:33,773 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33442, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 12:55:33,783 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34525,1685364930140 2023-05-29 12:55:33,783 INFO [Listener at localhost/46381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:55:33,793 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-29 12:55:33,793 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:55:33,794 INFO [Listener at localhost/46381] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-29 12:55:33,803 DEBUG [Listener at localhost/46381] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-29 12:55:33,806 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40474, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-29 12:55:33,816 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34525] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-29 12:55:33,816 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34525] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
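The block above ends with the mini-cluster reported as up, a client connection established over ClientService/MasterService, a "set balanceSwitch=false" master RPC, and warnings that hbase.hregion.max.filesize (786432) and hbase.hregion.memstore.flush.size (8192) are unusually small for this test. Below is a minimal client-side sketch of that balancer call using the standard HBase 2.x API; it illustrates the kind of call a test like this would make, is not the test's actual code, and uses a placeholder ZooKeeper quorum.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class BalancerOffSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Placeholder quorum; in the test this comes from the mini cluster (127.0.0.1:63514 above).
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Mirrors the "set balanceSwitch=false" RPC logged above: turn the balancer off,
      // presumably so background region moves cannot interfere with the log-rolling scenario.
      boolean previous = admin.balancerSwitch(false, true);
      System.out.println("balancer was previously " + (previous ? "on" : "off"));
    }
  }
}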
2023-05-29 12:55:33,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34525] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 12:55:33,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34525] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-05-29 12:55:33,824 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 12:55:33,826 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 12:55:33,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34525] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-05-29 12:55:33,829 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:55:33,831 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34 empty. 
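The MAX_FILESIZE (786432) and MEMSTORE_FLUSHSIZE (8192) warnings above are expected here: the test deliberately shrinks both limits so that flushes, split checks and WAL rolls happen within seconds. A minimal sketch of building an equivalent descriptor with the HBase 2.x client API, using only the attribute values visible in the create request above (the connection setup and any remaining defaults are assumptions, not taken from this log):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTinyTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
          .newBuilder(Bytes.toBytes("info"))
          .setBloomFilterType(BloomType.ROW)      // BLOOMFILTER => 'ROW'
          .setMaxVersions(1)                      // VERSIONS => '1'
          .setBlocksize(65536)                    // BLOCKSIZE => '65536'
          .build();
      TableDescriptor td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("TestLogRolling-testSlowSyncLogRolling"))
          .setColumnFamily(info)
          .setMaxFileSize(786432L)                // the value behind the MAX_FILESIZE warning
          .setMemStoreFlushSize(8192L)            // the value behind the MEMSTORE_FLUSHSIZE warning
          .build();
      admin.createTable(td);                      // yields a CreateTableProcedure like pid=9 above
    }
  }
}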
2023-05-29 12:55:33,833 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:55:33,833 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-05-29 12:55:33,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34525] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 12:55:33,856 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-29 12:55:33,858 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8c130ccb95a7e29d957774ad2fa60f34, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/.tmp 2023-05-29 12:55:33,871 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:55:33,871 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing 8c130ccb95a7e29d957774ad2fa60f34, disabling compactions & flushes 2023-05-29 12:55:33,871 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 2023-05-29 12:55:33,871 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 2023-05-29 12:55:33,871 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. after waiting 0 ms 2023-05-29 12:55:33,871 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 2023-05-29 12:55:33,871 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 
2023-05-29 12:55:33,871 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 8c130ccb95a7e29d957774ad2fa60f34: 2023-05-29 12:55:33,875 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 12:55:33,877 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685364933877"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685364933877"}]},"ts":"1685364933877"} 2023-05-29 12:55:33,880 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 12:55:33,882 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 12:55:33,882 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685364933882"}]},"ts":"1685364933882"} 2023-05-29 12:55:33,884 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-05-29 12:55:33,889 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=8c130ccb95a7e29d957774ad2fa60f34, ASSIGN}] 2023-05-29 12:55:33,891 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=8c130ccb95a7e29d957774ad2fa60f34, ASSIGN 2023-05-29 12:55:33,892 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=8c130ccb95a7e29d957774ad2fa60f34, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44081,1685364931300; forceNewPlan=false, retain=false 2023-05-29 12:55:34,043 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=8c130ccb95a7e29d957774ad2fa60f34, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:55:34,044 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685364934043"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685364934043"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685364934043"}]},"ts":"1685364934043"} 2023-05-29 12:55:34,047 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 8c130ccb95a7e29d957774ad2fa60f34, server=jenkins-hbase4.apache.org,44081,1685364931300}] 2023-05-29 12:55:34,207 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 2023-05-29 12:55:34,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8c130ccb95a7e29d957774ad2fa60f34, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:55:34,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling 8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:55:34,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:55:34,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:55:34,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:55:34,210 INFO [StoreOpener-8c130ccb95a7e29d957774ad2fa60f34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:55:34,212 DEBUG [StoreOpener-8c130ccb95a7e29d957774ad2fa60f34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info 2023-05-29 12:55:34,212 DEBUG [StoreOpener-8c130ccb95a7e29d957774ad2fa60f34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info 2023-05-29 12:55:34,213 INFO [StoreOpener-8c130ccb95a7e29d957774ad2fa60f34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8c130ccb95a7e29d957774ad2fa60f34 columnFamilyName info 2023-05-29 12:55:34,214 INFO [StoreOpener-8c130ccb95a7e29d957774ad2fa60f34-1] regionserver.HStore(310): Store=8c130ccb95a7e29d957774ad2fa60f34/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:55:34,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:55:34,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:55:34,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:55:34,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:55:34,225 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8c130ccb95a7e29d957774ad2fa60f34; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=798599, jitterRate=0.015471741557121277}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:55:34,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8c130ccb95a7e29d957774ad2fa60f34: 2023-05-29 12:55:34,226 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34., pid=11, masterSystemTime=1685364934201 2023-05-29 12:55:34,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 2023-05-29 12:55:34,229 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 
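The split-policy line above is internally consistent with the tiny test configuration: desiredMaxFileSize=798599 is the 786432-byte maximum with the logged jitterRate applied, and initialSize=16384 is twice the 8192-byte flush size, which is why the later split checks compare against sizeToCheck=16.0 K while the table has a single region. A back-of-the-envelope check using only numbers from the log (plain arithmetic, not HBase source; the cubic growth term is an assumption and is irrelevant with one region):

public class SplitSizeCheck {
  public static void main(String[] args) {
    long maxFileSize = 786432L;                 // "hbase.hregion.max.filesize" per the warning above
    long flushSize = 8192L;                     // "hbase.hregion.memstore.flush.size" per the warning above
    double jitterRate = 0.015471741557121277;   // as logged for ConstantSizeRegionSplitPolicy

    long desiredMaxFileSize = (long) (maxFileSize * (1.0 + jitterRate));
    System.out.println(desiredMaxFileSize);     // 798599, matching the log

    long initialSize = 2 * flushSize;           // 16384, matching initialSize in the log
    int regionsWithCommonTable = 1;             // as logged later by the split checks
    long sizeToCheck = Math.min(desiredMaxFileSize,
        initialSize * (long) Math.pow(regionsWithCommonTable, 3));  // assumed growth; 1 region => 16384
    System.out.println(sizeToCheck);            // 16384 = 16.0 K, the sizeToCheck seen in later checks
  }
}

The repeated "cannot split ... because midkey is the same as first or last row" decisions further down are the other half of the same check: the store does exceed sizeToCheck, but a file whose middle key equals its first or last row cannot be cut into two non-empty daughters, so no split is requested.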
2023-05-29 12:55:34,230 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=8c130ccb95a7e29d957774ad2fa60f34, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:55:34,230 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685364934230"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685364934230"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685364934230"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685364934230"}]},"ts":"1685364934230"} 2023-05-29 12:55:34,237 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-29 12:55:34,237 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 8c130ccb95a7e29d957774ad2fa60f34, server=jenkins-hbase4.apache.org,44081,1685364931300 in 186 msec 2023-05-29 12:55:34,241 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-29 12:55:34,241 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=8c130ccb95a7e29d957774ad2fa60f34, ASSIGN in 348 msec 2023-05-29 12:55:34,242 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 12:55:34,243 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685364934242"}]},"ts":"1685364934242"} 2023-05-29 12:55:34,245 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-05-29 12:55:34,248 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 12:55:34,251 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 428 msec 2023-05-29 12:55:38,317 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-05-29 12:55:38,415 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-29 12:55:38,417 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-29 12:55:38,418 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-05-29 12:55:40,292 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-29 12:55:40,293 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-05-29 12:55:43,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34525] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 12:55:43,847 INFO [Listener at localhost/46381] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-05-29 12:55:43,851 DEBUG [Listener at localhost/46381] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-05-29 12:55:43,852 DEBUG [Listener at localhost/46381] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 2023-05-29 12:55:55,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44081] regionserver.HRegion(9158): Flush requested on 8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:55:55,909 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 8c130ccb95a7e29d957774ad2fa60f34 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 12:55:55,975 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/.tmp/info/772702ce91f84883a19cfc9c9371e00a 2023-05-29 12:55:56,020 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/.tmp/info/772702ce91f84883a19cfc9c9371e00a as hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/772702ce91f84883a19cfc9c9371e00a 2023-05-29 12:55:56,030 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/772702ce91f84883a19cfc9c9371e00a, entries=7, sequenceid=11, filesize=12.1 K 2023-05-29 12:55:56,033 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 8c130ccb95a7e29d957774ad2fa60f34 in 125ms, sequenceid=11, compaction requested=false 2023-05-29 12:55:56,034 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 8c130ccb95a7e29d957774ad2fa60f34: 2023-05-29 12:56:04,119 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:06,323 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:08,526 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:10,728 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:10,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44081] regionserver.HRegion(9158): Flush requested on 8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:56:10,729 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 8c130ccb95a7e29d957774ad2fa60f34 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 12:56:10,930 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:10,948 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/.tmp/info/e9fd5676f4e64e90baa791260354f488 2023-05-29 12:56:10,955 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/.tmp/info/e9fd5676f4e64e90baa791260354f488 as hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/e9fd5676f4e64e90baa791260354f488 2023-05-29 12:56:10,964 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/e9fd5676f4e64e90baa791260354f488, entries=7, sequenceid=21, filesize=12.1 K 2023-05-29 12:56:11,165 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:11,166 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 8c130ccb95a7e29d957774ad2fa60f34 in 436ms, sequenceid=21, compaction requested=false 2023-05-29 12:56:11,166 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 8c130ccb95a7e29d957774ad2fa60f34: 2023-05-29 12:56:11,166 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-05-29 12:56:11,167 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 12:56:11,168 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/772702ce91f84883a19cfc9c9371e00a 
because midkey is the same as first or last row 2023-05-29 12:56:12,931 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:15,134 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:15,135 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44081%2C1685364931300:(num 1685364932610) roll requested 2023-05-29 12:56:15,135 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:15,347 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:15,348 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/WALs/jenkins-hbase4.apache.org,44081,1685364931300/jenkins-hbase4.apache.org%2C44081%2C1685364931300.1685364932610 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/WALs/jenkins-hbase4.apache.org,44081,1685364931300/jenkins-hbase4.apache.org%2C44081%2C1685364931300.1685364975135 2023-05-29 12:56:15,349 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:15,349 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/WALs/jenkins-hbase4.apache.org,44081,1685364931300/jenkins-hbase4.apache.org%2C44081%2C1685364931300.1685364932610 is not closed yet, will try archiving it next time 2023-05-29 12:56:25,147 INFO [Listener at localhost/46381] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-29 12:56:30,149 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:30,149 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:30,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44081] regionserver.HRegion(9158): Flush requested on 8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:56:30,149 DEBUG 
[regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44081%2C1685364931300:(num 1685364975135) roll requested 2023-05-29 12:56:30,150 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 8c130ccb95a7e29d957774ad2fa60f34 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 12:56:32,150 INFO [Listener at localhost/46381] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-29 12:56:35,151 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:35,151 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:35,163 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:35,163 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK], DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK]] 2023-05-29 12:56:35,165 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/WALs/jenkins-hbase4.apache.org,44081,1685364931300/jenkins-hbase4.apache.org%2C44081%2C1685364931300.1685364975135 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/WALs/jenkins-hbase4.apache.org,44081,1685364931300/jenkins-hbase4.apache.org%2C44081%2C1685364931300.1685364990150 2023-05-29 12:56:35,165 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41803,DS-8b6e87ed-2b6c-4cf8-aae5-50070e2d7453,DISK], DatanodeInfoWithStorage[127.0.0.1:34347,DS-42a060da-f71b-4bdc-b552-7803dc51a3ec,DISK]] 2023-05-29 12:56:35,165 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/WALs/jenkins-hbase4.apache.org,44081,1685364931300/jenkins-hbase4.apache.org%2C44081%2C1685364931300.1685364975135 is not closed yet, will try archiving it next time 2023-05-29 12:56:35,179 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/.tmp/info/1765737c43bd417d9711ce7518d8d665 2023-05-29 12:56:35,188 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/.tmp/info/1765737c43bd417d9711ce7518d8d665 as hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/1765737c43bd417d9711ce7518d8d665 2023-05-29 12:56:35,198 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/1765737c43bd417d9711ce7518d8d665, entries=7, sequenceid=31, filesize=12.1 K 2023-05-29 12:56:35,200 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 8c130ccb95a7e29d957774ad2fa60f34 in 5051ms, sequenceid=31, compaction requested=true 2023-05-29 12:56:35,201 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 8c130ccb95a7e29d957774ad2fa60f34: 2023-05-29 12:56:35,201 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-05-29 12:56:35,201 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 12:56:35,201 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/772702ce91f84883a19cfc9c9371e00a because midkey is the same as first or last row 2023-05-29 12:56:35,206 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 12:56:35,206 DEBUG [RS:0;jenkins-hbase4:44081-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 12:56:35,211 DEBUG [RS:0;jenkins-hbase4:44081-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 12:56:35,213 DEBUG [RS:0;jenkins-hbase4:44081-shortCompactions-0] regionserver.HStore(1912): 8c130ccb95a7e29d957774ad2fa60f34/info is initiating minor compaction (all files) 2023-05-29 12:56:35,213 INFO [RS:0;jenkins-hbase4:44081-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 8c130ccb95a7e29d957774ad2fa60f34/info in TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 
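The WAL activity above shows the two roll triggers this test exercises: a roll requested once more than five merely-slow syncs accumulate (the 200 ms "Slow sync cost" entries; "count=7, threshold=5"), and a roll requested by any single sync at or over 5000 ms ("time=5000 ms, threshold=5000 ms"). A toy model of that decision using only the threshold values visible in the log (an illustration, not HBase's AbstractFSWAL):

import java.util.concurrent.atomic.AtomicInteger;

class SlowSyncRollPolicy {
  private final long slowSyncMs;        // a sync at or above this counts as "slow" (the 200 ms entries above)
  private final int slowSyncRollCount;  // "threshold=5" in the count-based roll request
  private final long rollOnSyncMs;      // "threshold=5000 ms" in the time-based roll request
  private final AtomicInteger slowSyncs = new AtomicInteger();

  SlowSyncRollPolicy(long slowSyncMs, int slowSyncRollCount, long rollOnSyncMs) {
    this.slowSyncMs = slowSyncMs;
    this.slowSyncRollCount = slowSyncRollCount;
    this.rollOnSyncMs = rollOnSyncMs;
  }

  /** Returns true if the sync that just completed should make the log roller request a roll. */
  boolean onSyncCompleted(long syncCostMs) {
    if (syncCostMs >= rollOnSyncMs) {
      return true;                                              // one catastrophically slow sync is enough
    }
    if (syncCostMs >= slowSyncMs) {
      return slowSyncs.incrementAndGet() > slowSyncRollCount;   // too many merely-slow syncs
    }
    return false;
  }

  /** After the roller swaps writers, the count starts over for the new WAL file. */
  void onWalRolled() {
    slowSyncs.set(0);
  }
}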
2023-05-29 12:56:35,214 INFO [RS:0;jenkins-hbase4:44081-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/772702ce91f84883a19cfc9c9371e00a, hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/e9fd5676f4e64e90baa791260354f488, hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/1765737c43bd417d9711ce7518d8d665] into tmpdir=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/.tmp, totalSize=36.3 K 2023-05-29 12:56:35,215 DEBUG [RS:0;jenkins-hbase4:44081-shortCompactions-0] compactions.Compactor(207): Compacting 772702ce91f84883a19cfc9c9371e00a, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685364943857 2023-05-29 12:56:35,216 DEBUG [RS:0;jenkins-hbase4:44081-shortCompactions-0] compactions.Compactor(207): Compacting e9fd5676f4e64e90baa791260354f488, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1685364957909 2023-05-29 12:56:35,216 DEBUG [RS:0;jenkins-hbase4:44081-shortCompactions-0] compactions.Compactor(207): Compacting 1765737c43bd417d9711ce7518d8d665, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1685364972730 2023-05-29 12:56:35,257 INFO [RS:0;jenkins-hbase4:44081-shortCompactions-0] throttle.PressureAwareThroughputController(145): 8c130ccb95a7e29d957774ad2fa60f34#info#compaction#3 average throughput is 5.39 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 12:56:35,290 DEBUG [RS:0;jenkins-hbase4:44081-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/.tmp/info/4ee1f4d9c3c04370b2cab792ddc61891 as hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/4ee1f4d9c3c04370b2cab792ddc61891 2023-05-29 12:56:35,308 INFO [RS:0;jenkins-hbase4:44081-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 8c130ccb95a7e29d957774ad2fa60f34/info of 8c130ccb95a7e29d957774ad2fa60f34 into 4ee1f4d9c3c04370b2cab792ddc61891(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
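The selection above follows from the compaction parameters printed when the store opened (ratio 1.2, minFilesToCompact 3, maxFilesToCompact 10): three flush outputs of roughly 12.1 K each are all "in ratio", so a minor compaction of all three (37197 bytes total) is chosen. A simplified version of that ratio test over a single candidate list (the real ExploringCompactionPolicy scores permutations, which is why the log mentions "1 permutations"):

import java.util.List;

class RatioCompactionCheck {
  /** True if this candidate list would be accepted under a simple size-ratio policy. */
  static boolean shouldCompact(List<Long> fileSizes, double ratio, int minFiles, int maxFiles) {
    if (fileSizes.size() < minFiles || fileSizes.size() > maxFiles) {
      return false;
    }
    long total = fileSizes.stream().mapToLong(Long::longValue).sum();
    // Every file must be no larger than ratio * (sum of the other files), i.e. "in ratio".
    return fileSizes.stream().allMatch(size -> size <= ratio * (total - size));
  }

  public static void main(String[] args) {
    // Three ~12.1 K flush files; 37197 / 3 = 12399 bytes per file is used purely for illustration.
    List<Long> files = List.of(12399L, 12399L, 12399L);
    System.out.println(shouldCompact(files, 1.2, 3, 10));   // true -> compact all three, as in the log
  }
}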
2023-05-29 12:56:35,309 DEBUG [RS:0;jenkins-hbase4:44081-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 8c130ccb95a7e29d957774ad2fa60f34: 2023-05-29 12:56:35,309 INFO [RS:0;jenkins-hbase4:44081-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34., storeName=8c130ccb95a7e29d957774ad2fa60f34/info, priority=13, startTime=1685364995203; duration=0sec 2023-05-29 12:56:35,310 DEBUG [RS:0;jenkins-hbase4:44081-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-05-29 12:56:35,310 DEBUG [RS:0;jenkins-hbase4:44081-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 12:56:35,310 DEBUG [RS:0;jenkins-hbase4:44081-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/4ee1f4d9c3c04370b2cab792ddc61891 because midkey is the same as first or last row 2023-05-29 12:56:35,311 DEBUG [RS:0;jenkins-hbase4:44081-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 12:56:47,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44081] regionserver.HRegion(9158): Flush requested on 8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:56:47,273 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 8c130ccb95a7e29d957774ad2fa60f34 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 12:56:47,294 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/.tmp/info/d23f2da6d2ca43baab656d89fe4e28c1 2023-05-29 12:56:47,304 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/.tmp/info/d23f2da6d2ca43baab656d89fe4e28c1 as hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/d23f2da6d2ca43baab656d89fe4e28c1 2023-05-29 12:56:47,312 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/d23f2da6d2ca43baab656d89fe4e28c1, entries=7, sequenceid=42, filesize=12.1 K 2023-05-29 12:56:47,314 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 8c130ccb95a7e29d957774ad2fa60f34 in 41ms, sequenceid=42, compaction requested=false 2023-05-29 12:56:47,314 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 8c130ccb95a7e29d957774ad2fa60f34: 2023-05-29 12:56:47,314 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-05-29 
12:56:47,314 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 12:56:47,314 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/4ee1f4d9c3c04370b2cab792ddc61891 because midkey is the same as first or last row 2023-05-29 12:56:55,282 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-29 12:56:55,282 INFO [Listener at localhost/46381] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-29 12:56:55,283 DEBUG [Listener at localhost/46381] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2741d125 to 127.0.0.1:63514 2023-05-29 12:56:55,283 DEBUG [Listener at localhost/46381] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:56:55,284 DEBUG [Listener at localhost/46381] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-29 12:56:55,284 DEBUG [Listener at localhost/46381] util.JVMClusterUtil(257): Found active master hash=63271835, stopped=false 2023-05-29 12:56:55,284 INFO [Listener at localhost/46381] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34525,1685364930140 2023-05-29 12:56:55,286 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): regionserver:44081-0x1007702761b0001, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 12:56:55,286 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 12:56:55,286 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:56:55,286 INFO [Listener at localhost/46381] procedure2.ProcedureExecutor(629): Stopping 2023-05-29 12:56:55,287 DEBUG [Listener at localhost/46381] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2d12d612 to 127.0.0.1:63514 2023-05-29 12:56:55,288 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44081-0x1007702761b0001, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:56:55,288 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:56:55,288 DEBUG [Listener at localhost/46381] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:56:55,288 INFO [Listener at localhost/46381] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44081,1685364931300' ***** 2023-05-29 12:56:55,288 INFO [Listener at localhost/46381] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 12:56:55,289 INFO [RS:0;jenkins-hbase4:44081] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 12:56:55,289 INFO [RS:0;jenkins-hbase4:44081] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-05-29 12:56:55,289 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 12:56:55,289 INFO [RS:0;jenkins-hbase4:44081] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-29 12:56:55,289 INFO [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(3303): Received CLOSE for 8842fbd4618c95a6a38eb896f3e2d098 2023-05-29 12:56:55,290 INFO [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(3303): Received CLOSE for 8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:56:55,290 INFO [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:56:55,290 DEBUG [RS:0;jenkins-hbase4:44081] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1a636be9 to 127.0.0.1:63514 2023-05-29 12:56:55,291 DEBUG [RS:0;jenkins-hbase4:44081] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:56:55,291 INFO [RS:0;jenkins-hbase4:44081] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 12:56:55,291 INFO [RS:0;jenkins-hbase4:44081] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 12:56:55,291 INFO [RS:0;jenkins-hbase4:44081] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-29 12:56:55,291 INFO [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 12:56:55,291 INFO [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-29 12:56:55,291 DEBUG [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 8842fbd4618c95a6a38eb896f3e2d098=hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098., 8c130ccb95a7e29d957774ad2fa60f34=TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34.} 2023-05-29 12:56:55,293 DEBUG [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(1504): Waiting on 1588230740, 8842fbd4618c95a6a38eb896f3e2d098, 8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:56:55,295 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8842fbd4618c95a6a38eb896f3e2d098, disabling compactions & flushes 2023-05-29 12:56:55,296 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 12:56:55,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. 2023-05-29 12:56:55,296 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 12:56:55,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. 2023-05-29 12:56:55,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. 
after waiting 0 ms 2023-05-29 12:56:55,296 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 12:56:55,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. 2023-05-29 12:56:55,296 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 12:56:55,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8842fbd4618c95a6a38eb896f3e2d098 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-29 12:56:55,296 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 12:56:55,296 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-05-29 12:56:55,358 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/.tmp/info/67cab92c4c694a4cb4ff0e9be0854363 2023-05-29 12:56:55,363 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/namespace/8842fbd4618c95a6a38eb896f3e2d098/.tmp/info/d3c91e538a074ce3b3b82545d7584cb2 2023-05-29 12:56:55,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/namespace/8842fbd4618c95a6a38eb896f3e2d098/.tmp/info/d3c91e538a074ce3b3b82545d7584cb2 as hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/namespace/8842fbd4618c95a6a38eb896f3e2d098/info/d3c91e538a074ce3b3b82545d7584cb2 2023-05-29 12:56:55,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/namespace/8842fbd4618c95a6a38eb896f3e2d098/info/d3c91e538a074ce3b3b82545d7584cb2, entries=2, sequenceid=6, filesize=4.8 K 2023-05-29 12:56:55,411 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 8842fbd4618c95a6a38eb896f3e2d098 in 115ms, sequenceid=6, compaction requested=false 2023-05-29 12:56:55,430 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/.tmp/table/82c43fbfcd5f41f4af7bb60458ca40b2 2023-05-29 12:56:55,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/namespace/8842fbd4618c95a6a38eb896f3e2d098/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 
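The "9.seqid" file written above is the marker that makes a later reopen cheap: the region records the highest sequence id it has flushed, so only WAL edits newer than that marker would still need replay. A toy illustration of how such a marker would be consumed (an assumption-laden sketch, not HBase's WALSplitUtil):

import java.nio.file.Path;

class SeqIdMarker {
  /** Parse the max flushed sequence id out of a "recovered.edits/<n>.seqid" marker file name. */
  static long parseMaxSeqId(Path seqidFile) {
    String name = seqidFile.getFileName().toString();        // e.g. "9.seqid"
    return Long.parseLong(name.substring(0, name.indexOf('.')));
  }

  /** Only edits strictly newer than the marker still need to be replayed on open. */
  static boolean shouldReplay(long editSeqId, long markerSeqId) {
    return editSeqId > markerSeqId;
  }

  public static void main(String[] args) {
    long marker = parseMaxSeqId(Path.of("recovered.edits/9.seqid"));
    System.out.println(shouldReplay(6, marker));    // false: already covered by the flush
    System.out.println(shouldReplay(12, marker));   // true: would need replay
  }
}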
2023-05-29 12:56:55,434 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. 2023-05-29 12:56:55,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8842fbd4618c95a6a38eb896f3e2d098: 2023-05-29 12:56:55,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685364932995.8842fbd4618c95a6a38eb896f3e2d098. 2023-05-29 12:56:55,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8c130ccb95a7e29d957774ad2fa60f34, disabling compactions & flushes 2023-05-29 12:56:55,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 2023-05-29 12:56:55,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 2023-05-29 12:56:55,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. after waiting 0 ms 2023-05-29 12:56:55,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 2023-05-29 12:56:55,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8c130ccb95a7e29d957774ad2fa60f34 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-05-29 12:56:55,443 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/.tmp/info/67cab92c4c694a4cb4ff0e9be0854363 as hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/info/67cab92c4c694a4cb4ff0e9be0854363 2023-05-29 12:56:55,455 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/info/67cab92c4c694a4cb4ff0e9be0854363, entries=20, sequenceid=14, filesize=7.4 K 2023-05-29 12:56:55,457 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/.tmp/table/82c43fbfcd5f41f4af7bb60458ca40b2 as hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/table/82c43fbfcd5f41f4af7bb60458ca40b2 2023-05-29 12:56:55,461 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/.tmp/info/cbcb277f9a0f48f69ff127e7ed2e4e2c 2023-05-29 12:56:55,468 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/table/82c43fbfcd5f41f4af7bb60458ca40b2, entries=4, sequenceid=14, filesize=4.8 K 2023-05-29 12:56:55,469 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2934, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 173ms, sequenceid=14, compaction requested=false 2023-05-29 12:56:55,470 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/.tmp/info/cbcb277f9a0f48f69ff127e7ed2e4e2c as hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/cbcb277f9a0f48f69ff127e7ed2e4e2c 2023-05-29 12:56:55,475 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-29 12:56:55,479 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-29 12:56:55,485 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/cbcb277f9a0f48f69ff127e7ed2e4e2c, entries=3, sequenceid=48, filesize=7.9 K 2023-05-29 12:56:55,488 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for 8c130ccb95a7e29d957774ad2fa60f34 in 53ms, sequenceid=48, compaction requested=true 2023-05-29 12:56:55,491 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-29 12:56:55,491 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/772702ce91f84883a19cfc9c9371e00a, hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/e9fd5676f4e64e90baa791260354f488, hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/1765737c43bd417d9711ce7518d8d665] to archive 2023-05-29 12:56:55,492 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-29 12:56:55,493 DEBUG [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(1504): Waiting on 1588230740, 8c130ccb95a7e29d957774ad2fa60f34 2023-05-29 12:56:55,494 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-29 12:56:55,496 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 12:56:55,496 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 12:56:55,497 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-29 12:56:55,505 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/772702ce91f84883a19cfc9c9371e00a to hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/archive/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/772702ce91f84883a19cfc9c9371e00a 2023-05-29 12:56:55,507 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/e9fd5676f4e64e90baa791260354f488 to hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/archive/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/e9fd5676f4e64e90baa791260354f488 2023-05-29 12:56:55,509 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/1765737c43bd417d9711ce7518d8d665 to hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/archive/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/info/1765737c43bd417d9711ce7518d8d665 2023-05-29 12:56:55,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/data/default/TestLogRolling-testSlowSyncLogRolling/8c130ccb95a7e29d957774ad2fa60f34/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-05-29 12:56:55,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 2023-05-29 12:56:55,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8c130ccb95a7e29d957774ad2fa60f34: 2023-05-29 12:56:55,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1685364933815.8c130ccb95a7e29d957774ad2fa60f34. 2023-05-29 12:56:55,694 INFO [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44081,1685364931300; all regions closed. 
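For context: the entries above show the close path for region 8c130ccb95a7e29d957774ad2fa60f34 flushing its memstore to a new store file and archiving the compacted HFiles before the region server exits. The same flush can also be driven explicitly from test code through the standard HBase client API; the following is a minimal sketch, not part of the test run logged here, with the table name and utility instance taken from the log purely for illustration.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class FlushSketch {
  // Forces a memstore flush comparable to the one performed during region close above.
  static void flushTable(HBaseTestingUtility util) throws Exception {
    TableName tn = TableName.valueOf("TestLogRolling-testSlowSyncLogRolling");
    try (Admin admin = util.getConnection().getAdmin()) {
      admin.flush(tn); // writes current memstore contents out as a new store file
    }
  }
}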
2023-05-29 12:56:55,695 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/WALs/jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:56:55,705 DEBUG [RS:0;jenkins-hbase4:44081] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/oldWALs 2023-05-29 12:56:55,705 INFO [RS:0;jenkins-hbase4:44081] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C44081%2C1685364931300.meta:.meta(num 1685364932768) 2023-05-29 12:56:55,705 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/WALs/jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:56:55,715 DEBUG [RS:0;jenkins-hbase4:44081] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/oldWALs 2023-05-29 12:56:55,716 INFO [RS:0;jenkins-hbase4:44081] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C44081%2C1685364931300:(num 1685364990150) 2023-05-29 12:56:55,716 DEBUG [RS:0;jenkins-hbase4:44081] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:56:55,716 INFO [RS:0;jenkins-hbase4:44081] regionserver.LeaseManager(133): Closed leases 2023-05-29 12:56:55,716 INFO [RS:0;jenkins-hbase4:44081] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-29 12:56:55,716 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 12:56:55,717 INFO [RS:0;jenkins-hbase4:44081] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44081 2023-05-29 12:56:55,723 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:56:55,723 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): regionserver:44081-0x1007702761b0001, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44081,1685364931300 2023-05-29 12:56:55,723 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): regionserver:44081-0x1007702761b0001, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:56:55,724 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44081,1685364931300] 2023-05-29 12:56:55,724 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44081,1685364931300; numProcessing=1 2023-05-29 12:56:55,730 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44081,1685364931300 already deleted, retry=false 2023-05-29 12:56:55,730 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44081,1685364931300 expired; onlineServers=0 2023-05-29 12:56:55,730 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,34525,1685364930140' ***** 
2023-05-29 12:56:55,730 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-29 12:56:55,730 DEBUG [M:0;jenkins-hbase4:34525] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@42f879a0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 12:56:55,730 INFO [M:0;jenkins-hbase4:34525] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34525,1685364930140 2023-05-29 12:56:55,730 INFO [M:0;jenkins-hbase4:34525] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34525,1685364930140; all regions closed. 2023-05-29 12:56:55,730 DEBUG [M:0;jenkins-hbase4:34525] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:56:55,731 DEBUG [M:0;jenkins-hbase4:34525] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-29 12:56:55,731 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-29 12:56:55,731 DEBUG [M:0;jenkins-hbase4:34525] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-29 12:56:55,731 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685364932247] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685364932247,5,FailOnTimeoutGroup] 2023-05-29 12:56:55,732 INFO [M:0;jenkins-hbase4:34525] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-29 12:56:55,731 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685364932247] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685364932247,5,FailOnTimeoutGroup] 2023-05-29 12:56:55,732 INFO [M:0;jenkins-hbase4:34525] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-29 12:56:55,733 INFO [M:0;jenkins-hbase4:34525] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-29 12:56:55,733 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-29 12:56:55,733 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:56:55,733 DEBUG [M:0;jenkins-hbase4:34525] master.HMaster(1512): Stopping service threads 2023-05-29 12:56:55,733 INFO [M:0;jenkins-hbase4:34525] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-29 12:56:55,734 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:56:55,734 INFO [M:0;jenkins-hbase4:34525] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-29 12:56:55,734 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-29 12:56:55,735 DEBUG [M:0;jenkins-hbase4:34525] zookeeper.ZKUtil(398): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-29 12:56:55,735 WARN [M:0;jenkins-hbase4:34525] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-29 12:56:55,735 INFO [M:0;jenkins-hbase4:34525] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-29 12:56:55,735 INFO [M:0;jenkins-hbase4:34525] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-29 12:56:55,736 DEBUG [M:0;jenkins-hbase4:34525] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 12:56:55,736 INFO [M:0;jenkins-hbase4:34525] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:56:55,736 DEBUG [M:0;jenkins-hbase4:34525] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:56:55,736 DEBUG [M:0;jenkins-hbase4:34525] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 12:56:55,736 DEBUG [M:0;jenkins-hbase4:34525] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:56:55,736 INFO [M:0;jenkins-hbase4:34525] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.29 KB heapSize=46.71 KB 2023-05-29 12:56:55,752 INFO [M:0;jenkins-hbase4:34525] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.29 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9f1f48990df5441eb15a45ed6d8c8a0b 2023-05-29 12:56:55,758 INFO [M:0;jenkins-hbase4:34525] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9f1f48990df5441eb15a45ed6d8c8a0b 2023-05-29 12:56:55,759 DEBUG [M:0;jenkins-hbase4:34525] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9f1f48990df5441eb15a45ed6d8c8a0b as hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9f1f48990df5441eb15a45ed6d8c8a0b 2023-05-29 12:56:55,765 INFO [M:0;jenkins-hbase4:34525] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9f1f48990df5441eb15a45ed6d8c8a0b 2023-05-29 12:56:55,766 INFO [M:0;jenkins-hbase4:34525] regionserver.HStore(1080): Added hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9f1f48990df5441eb15a45ed6d8c8a0b, entries=11, sequenceid=100, filesize=6.1 K 2023-05-29 12:56:55,767 INFO [M:0;jenkins-hbase4:34525] regionserver.HRegion(2948): Finished flush of dataSize ~38.29 KB/39208, heapSize ~46.70 KB/47816, currentSize=0 B/0 for 
1595e783b53d99cd5eef43b6debb2682 in 30ms, sequenceid=100, compaction requested=false 2023-05-29 12:56:55,768 INFO [M:0;jenkins-hbase4:34525] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:56:55,768 DEBUG [M:0;jenkins-hbase4:34525] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:56:55,768 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/MasterData/WALs/jenkins-hbase4.apache.org,34525,1685364930140 2023-05-29 12:56:55,772 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 12:56:55,772 INFO [M:0;jenkins-hbase4:34525] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-29 12:56:55,773 INFO [M:0;jenkins-hbase4:34525] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34525 2023-05-29 12:56:55,775 DEBUG [M:0;jenkins-hbase4:34525] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34525,1685364930140 already deleted, retry=false 2023-05-29 12:56:55,829 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): regionserver:44081-0x1007702761b0001, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 12:56:55,829 INFO [RS:0;jenkins-hbase4:44081] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44081,1685364931300; zookeeper connection closed. 2023-05-29 12:56:55,829 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): regionserver:44081-0x1007702761b0001, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 12:56:55,830 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@35129d88] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@35129d88 2023-05-29 12:56:55,830 INFO [Listener at localhost/46381] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-29 12:56:55,929 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 12:56:55,929 INFO [M:0;jenkins-hbase4:34525] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34525,1685364930140; zookeeper connection closed. 
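For context: at this point the region server and the master have both closed their regions, archived their WALs, and dropped their ZooKeeper ephemeral nodes, and JVMClusterUtil reports the shutdown of 1 master and 1 region server as complete. In a JUnit test this whole sequence is normally triggered by a single teardown call; a minimal sketch, assuming the conventional TEST_UTIL field used by HBase tests (not copied from the test source itself):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;

public class MiniClusterTeardownSketch {
  // Assumed to be the same utility instance that started the mini cluster.
  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    // Stops the region server(s) and master, then mini DFS and the mini ZooKeeper cluster,
    // producing a shutdown sequence like the one logged above.
    TEST_UTIL.shutdownMiniCluster();
  }
}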
2023-05-29 12:56:55,930 DEBUG [Listener at localhost/46381-EventThread] zookeeper.ZKWatcher(600): master:34525-0x1007702761b0000, quorum=127.0.0.1:63514, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 12:56:55,931 WARN [Listener at localhost/46381] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:56:55,934 INFO [Listener at localhost/46381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:56:56,039 WARN [BP-976256191-172.31.14.131-1685364927147 heartbeating to localhost/127.0.0.1:40317] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 12:56:56,039 WARN [BP-976256191-172.31.14.131-1685364927147 heartbeating to localhost/127.0.0.1:40317] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-976256191-172.31.14.131-1685364927147 (Datanode Uuid 2d444b52-d8c8-43d6-b01c-7ffad814bc5c) service to localhost/127.0.0.1:40317 2023-05-29 12:56:56,041 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/cluster_61ec3c7c-8513-c544-5107-712bca53b89c/dfs/data/data3/current/BP-976256191-172.31.14.131-1685364927147] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:56:56,041 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/cluster_61ec3c7c-8513-c544-5107-712bca53b89c/dfs/data/data4/current/BP-976256191-172.31.14.131-1685364927147] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:56:56,042 WARN [Listener at localhost/46381] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:56:56,044 INFO [Listener at localhost/46381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:56:56,137 WARN [BP-976256191-172.31.14.131-1685364927147 heartbeating to localhost/127.0.0.1:40317] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-976256191-172.31.14.131-1685364927147 (Datanode Uuid 18a4d8fc-7a15-4b8c-96b2-f86761de45fc) service to localhost/127.0.0.1:40317 2023-05-29 12:56:56,138 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/cluster_61ec3c7c-8513-c544-5107-712bca53b89c/dfs/data/data1/current/BP-976256191-172.31.14.131-1685364927147] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:56:56,138 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/cluster_61ec3c7c-8513-c544-5107-712bca53b89c/dfs/data/data2/current/BP-976256191-172.31.14.131-1685364927147] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:56:56,182 INFO [Listener at localhost/46381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:56:56,300 INFO [Listener at localhost/46381] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-29 12:56:56,353 INFO 
[Listener at localhost/46381] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-29 12:56:56,369 INFO [Listener at localhost/46381] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=51 (was 10) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost:40317 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46381 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (96891934) connection to localhost/127.0.0.1:40317 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 
org.apache.hadoop.hdfs.PeerCache@6286cc97 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (96891934) connection to localhost/127.0.0.1:40317 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:40317 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (96891934) connection to localhost/127.0.0.1:40317 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: regionserver/jenkins-hbase4:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=442 (was 264) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=87 (was 206), ProcessCount=168 (was 168), AvailableMemoryMB=3973 (was 4552) 2023-05-29 12:56:56,382 INFO [Listener at localhost/46381] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=52, OpenFileDescriptor=442, MaxFileDescriptor=60000, SystemLoadAverage=87, ProcessCount=168, AvailableMemoryMB=3973 2023-05-29 12:56:56,383 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-29 12:56:56,383 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/hadoop.log.dir so I do NOT create it in target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e 2023-05-29 12:56:56,383 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b8767a56-3d6a-e855-ee41-1fa2637eebba/hadoop.tmp.dir so I do NOT create it in target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e 2023-05-29 12:56:56,383 INFO [Listener at localhost/46381] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07, deleteOnExit=true 2023-05-29 12:56:56,383 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-29 12:56:56,384 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/test.cache.data in system properties and HBase conf 2023-05-29 12:56:56,384 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/hadoop.tmp.dir in system properties and HBase conf 2023-05-29 12:56:56,384 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/hadoop.log.dir in system properties and HBase conf 2023-05-29 12:56:56,384 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-29 12:56:56,384 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-29 12:56:56,385 INFO [Listener at localhost/46381] 
hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-29 12:56:56,385 DEBUG [Listener at localhost/46381] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-29 12:56:56,385 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-29 12:56:56,386 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-29 12:56:56,386 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-29 12:56:56,386 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 12:56:56,386 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-29 12:56:56,386 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-29 12:56:56,387 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 12:56:56,387 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 12:56:56,387 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-29 12:56:56,387 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/nfs.dump.dir in system properties and HBase conf 2023-05-29 12:56:56,387 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/java.io.tmpdir in system properties and HBase conf 2023-05-29 12:56:56,387 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 12:56:56,388 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-29 12:56:56,388 INFO [Listener at localhost/46381] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-29 12:56:56,390 WARN [Listener at localhost/46381] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-29 12:56:56,393 WARN [Listener at localhost/46381] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 12:56:56,393 WARN [Listener at localhost/46381] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 12:56:56,446 WARN [Listener at localhost/46381] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:56:56,450 INFO [Listener at localhost/46381] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:56:56,457 INFO [Listener at localhost/46381] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/java.io.tmpdir/Jetty_localhost_39653_hdfs____.2ituso/webapp 2023-05-29 12:56:56,458 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-29 12:56:56,580 INFO [Listener at localhost/46381] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39653 2023-05-29 12:56:56,581 WARN [Listener at localhost/46381] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
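For context: the second test (testLogRollOnDatanodeDeath) is now bringing up a fresh mini cluster with the same StartMiniClusterOption shape logged at the start of this test (1 master, 1 region server, 2 datanodes, 1 ZooKeeper server), which is why the property-setting, DFS-format, and Jetty startup messages repeat under a new test-data directory. A minimal sketch of the startup call that produces this sequence, assuming only the public HBaseTestingUtility API:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public final class MiniClusterStartSketch {
  static HBaseTestingUtility startCluster() throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)        // one HMaster
        .numRegionServers(1)  // one HRegionServer
        .numDataNodes(2)      // two HDFS datanodes, as in the option logged for this test
        .build();
    // Starts the mini ZooKeeper cluster, mini DFS, and the HBase cluster in order.
    util.startMiniCluster(option);
    return util;
  }
}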
2023-05-29 12:56:56,584 WARN [Listener at localhost/46381] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 12:56:56,584 WARN [Listener at localhost/46381] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 12:56:56,627 WARN [Listener at localhost/39485] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:56:56,637 WARN [Listener at localhost/39485] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:56:56,640 WARN [Listener at localhost/39485] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:56:56,641 INFO [Listener at localhost/39485] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:56:56,645 INFO [Listener at localhost/39485] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/java.io.tmpdir/Jetty_localhost_34007_datanode____.5y7pb7/webapp 2023-05-29 12:56:56,758 INFO [Listener at localhost/39485] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34007 2023-05-29 12:56:56,770 WARN [Listener at localhost/45937] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:56:56,792 WARN [Listener at localhost/45937] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:56:56,794 WARN [Listener at localhost/45937] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:56:56,795 INFO [Listener at localhost/45937] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:56:56,799 INFO [Listener at localhost/45937] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/java.io.tmpdir/Jetty_localhost_34565_datanode____77um1m/webapp 2023-05-29 12:56:56,887 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x71c7650532ee1bd5: Processing first storage report for DS-1b911015-e18e-4414-baac-718b24225b9f from datanode ea7f5168-7fa6-4973-a1fc-02e009c04a5b 2023-05-29 12:56:56,887 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x71c7650532ee1bd5: from storage DS-1b911015-e18e-4414-baac-718b24225b9f node DatanodeRegistration(127.0.0.1:34757, datanodeUuid=ea7f5168-7fa6-4973-a1fc-02e009c04a5b, infoPort=45071, infoSecurePort=0, ipcPort=45937, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:56:56,887 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x71c7650532ee1bd5: Processing first storage report for DS-5691426f-08db-4e1e-abc5-a2f2e15d5726 from datanode ea7f5168-7fa6-4973-a1fc-02e009c04a5b 2023-05-29 12:56:56,887 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0x71c7650532ee1bd5: from storage DS-5691426f-08db-4e1e-abc5-a2f2e15d5726 node DatanodeRegistration(127.0.0.1:34757, datanodeUuid=ea7f5168-7fa6-4973-a1fc-02e009c04a5b, infoPort=45071, infoSecurePort=0, ipcPort=45937, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:56:56,902 INFO [Listener at localhost/45937] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34565 2023-05-29 12:56:56,910 WARN [Listener at localhost/37799] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:56:57,012 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8a849889365a5930: Processing first storage report for DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf from datanode 6191f094-b773-4427-9636-a59e1a64a84c 2023-05-29 12:56:57,013 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8a849889365a5930: from storage DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf node DatanodeRegistration(127.0.0.1:33483, datanodeUuid=6191f094-b773-4427-9636-a59e1a64a84c, infoPort=34987, infoSecurePort=0, ipcPort=37799, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 12:56:57,013 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8a849889365a5930: Processing first storage report for DS-70f2fe99-e1ca-4072-9aaa-58c52aa2379b from datanode 6191f094-b773-4427-9636-a59e1a64a84c 2023-05-29 12:56:57,013 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8a849889365a5930: from storage DS-70f2fe99-e1ca-4072-9aaa-58c52aa2379b node DatanodeRegistration(127.0.0.1:33483, datanodeUuid=6191f094-b773-4427-9636-a59e1a64a84c, infoPort=34987, infoSecurePort=0, ipcPort=37799, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:56:57,019 DEBUG [Listener at localhost/37799] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e 2023-05-29 12:56:57,022 INFO [Listener at localhost/37799] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/zookeeper_0, clientPort=51115, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-29 12:56:57,027 INFO [Listener at localhost/37799] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51115 2023-05-29 12:56:57,027 INFO [Listener at localhost/37799] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:56:57,028 INFO [Listener at localhost/37799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:56:57,046 INFO [Listener at localhost/37799] util.FSUtils(471): Created version file at hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124 with version=8 2023-05-29 12:56:57,046 INFO [Listener at localhost/37799] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/hbase-staging 2023-05-29 12:56:57,048 INFO [Listener at localhost/37799] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 12:56:57,048 INFO [Listener at localhost/37799] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:56:57,048 INFO [Listener at localhost/37799] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 12:56:57,048 INFO [Listener at localhost/37799] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 12:56:57,048 INFO [Listener at localhost/37799] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:56:57,048 INFO [Listener at localhost/37799] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 12:56:57,048 INFO [Listener at localhost/37799] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 12:56:57,049 INFO [Listener at localhost/37799] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35997 2023-05-29 12:56:57,050 INFO [Listener at localhost/37799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:56:57,051 INFO [Listener at localhost/37799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:56:57,052 INFO [Listener at localhost/37799] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35997 connecting to ZooKeeper ensemble=127.0.0.1:51115 2023-05-29 12:56:57,060 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:359970x0, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 12:56:57,061 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35997-0x1007703ccc10000 connected 2023-05-29 12:56:57,077 DEBUG [Listener at localhost/37799] 
zookeeper.ZKUtil(164): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:56:57,077 DEBUG [Listener at localhost/37799] zookeeper.ZKUtil(164): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:56:57,078 DEBUG [Listener at localhost/37799] zookeeper.ZKUtil(164): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 12:56:57,078 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35997 2023-05-29 12:56:57,078 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35997 2023-05-29 12:56:57,079 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35997 2023-05-29 12:56:57,080 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35997 2023-05-29 12:56:57,081 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35997 2023-05-29 12:56:57,081 INFO [Listener at localhost/37799] master.HMaster(444): hbase.rootdir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124, hbase.cluster.distributed=false 2023-05-29 12:56:57,095 INFO [Listener at localhost/37799] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 12:56:57,096 INFO [Listener at localhost/37799] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:56:57,096 INFO [Listener at localhost/37799] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 12:56:57,096 INFO [Listener at localhost/37799] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 12:56:57,096 INFO [Listener at localhost/37799] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:56:57,096 INFO [Listener at localhost/37799] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 12:56:57,096 INFO [Listener at localhost/37799] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 12:56:57,098 INFO [Listener at localhost/37799] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46077 2023-05-29 12:56:57,098 INFO [Listener at localhost/37799] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 12:56:57,099 DEBUG [Listener at localhost/37799] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 
12:56:57,100 INFO [Listener at localhost/37799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:56:57,102 INFO [Listener at localhost/37799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:56:57,103 INFO [Listener at localhost/37799] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46077 connecting to ZooKeeper ensemble=127.0.0.1:51115 2023-05-29 12:56:57,106 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:460770x0, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 12:56:57,107 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46077-0x1007703ccc10001 connected 2023-05-29 12:56:57,107 DEBUG [Listener at localhost/37799] zookeeper.ZKUtil(164): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:56:57,108 DEBUG [Listener at localhost/37799] zookeeper.ZKUtil(164): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:56:57,109 DEBUG [Listener at localhost/37799] zookeeper.ZKUtil(164): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 12:56:57,111 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46077 2023-05-29 12:56:57,111 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46077 2023-05-29 12:56:57,111 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46077 2023-05-29 12:56:57,111 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46077 2023-05-29 12:56:57,112 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46077 2023-05-29 12:56:57,113 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,35997,1685365017047 2023-05-29 12:56:57,116 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 12:56:57,116 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,35997,1685365017047 2023-05-29 12:56:57,117 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 12:56:57,117 DEBUG [Listener at 
localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 12:56:57,117 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:56:57,118 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 12:56:57,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 12:56:57,119 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,35997,1685365017047 from backup master directory 2023-05-29 12:56:57,121 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,35997,1685365017047 2023-05-29 12:56:57,121 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 12:56:57,121 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 12:56:57,121 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,35997,1685365017047 2023-05-29 12:56:57,140 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/hbase.id with ID: 9ce537c2-7e48-4f8f-a823-3ff3e733f0be 2023-05-29 12:56:57,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:56:57,155 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:56:57,169 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x73fb4103 to 127.0.0.1:51115 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:56:57,175 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6960d2d3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:56:57,175 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 12:56:57,176 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-29 12:56:57,176 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:56:57,178 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/data/master/store-tmp 2023-05-29 12:56:57,191 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:56:57,191 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 12:56:57,191 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:56:57,191 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:56:57,191 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 12:56:57,192 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:56:57,192 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:56:57,192 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:56:57,193 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/WALs/jenkins-hbase4.apache.org,35997,1685365017047 2023-05-29 12:56:57,196 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35997%2C1685365017047, suffix=, logDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/WALs/jenkins-hbase4.apache.org,35997,1685365017047, archiveDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/oldWALs, maxLogs=10 2023-05-29 12:56:57,211 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/WALs/jenkins-hbase4.apache.org,35997,1685365017047/jenkins-hbase4.apache.org%2C35997%2C1685365017047.1685365017197 2023-05-29 12:56:57,211 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK], DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK]] 2023-05-29 12:56:57,211 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:56:57,211 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:56:57,212 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:56:57,212 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:56:57,214 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:56:57,217 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-29 12:56:57,217 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-29 12:56:57,218 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:56:57,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:56:57,221 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:56:57,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:56:57,228 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:56:57,229 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=725047, jitterRate=-0.07805567979812622}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:56:57,229 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:56:57,229 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-29 12:56:57,231 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-29 12:56:57,231 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
2023-05-29 12:56:57,231 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-29 12:56:57,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-05-29 12:56:57,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-29 12:56:57,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-29 12:56:57,236 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-29 12:56:57,238 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-29 12:56:57,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-29 12:56:57,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-29 12:56:57,254 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-29 12:56:57,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-29 12:56:57,255 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-29 12:56:57,261 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:56:57,262 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-29 12:56:57,263 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-29 12:56:57,264 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-29 12:56:57,266 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 12:56:57,266 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 12:56:57,266 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:56:57,267 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,35997,1685365017047, sessionid=0x1007703ccc10000, setting cluster-up flag (Was=false) 2023-05-29 12:56:57,273 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:56:57,279 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-29 12:56:57,281 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35997,1685365017047 2023-05-29 12:56:57,286 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 
12:56:57,292 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-29 12:56:57,294 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35997,1685365017047 2023-05-29 12:56:57,295 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/.hbase-snapshot/.tmp 2023-05-29 12:56:57,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-29 12:56:57,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:56:57,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:56:57,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:56:57,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:56:57,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-29 12:56:57,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:57,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 12:56:57,303 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:57,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685365047309 2023-05-29 12:56:57,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-29 12:56:57,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-29 12:56:57,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-29 12:56:57,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-29 12:56:57,309 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-29 12:56:57,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-29 12:56:57,311 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 12:56:57,311 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-29 12:56:57,313 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 12:56:57,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-29 12:56:57,324 INFO [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(951): ClusterId : 9ce537c2-7e48-4f8f-a823-3ff3e733f0be 2023-05-29 12:56:57,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-29 12:56:57,326 DEBUG [RS:0;jenkins-hbase4:46077] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 12:56:57,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-29 12:56:57,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-29 12:56:57,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-29 12:56:57,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-29 12:56:57,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365017327,5,FailOnTimeoutGroup] 2023-05-29 12:56:57,328 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365017327,5,FailOnTimeoutGroup] 2023-05-29 12:56:57,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:57,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-29 12:56:57,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:57,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-29 12:56:57,329 DEBUG [RS:0;jenkins-hbase4:46077] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 12:56:57,329 DEBUG [RS:0;jenkins-hbase4:46077] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 12:56:57,334 DEBUG [RS:0;jenkins-hbase4:46077] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 12:56:57,335 DEBUG [RS:0;jenkins-hbase4:46077] zookeeper.ReadOnlyZKClient(139): Connect 0x732f681b to 127.0.0.1:51115 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:56:57,341 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 12:56:57,342 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 12:56:57,342 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124 2023-05-29 12:56:57,343 DEBUG [RS:0;jenkins-hbase4:46077] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7de8980e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:56:57,343 DEBUG [RS:0;jenkins-hbase4:46077] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@79d7036b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 12:56:57,353 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:56:57,355 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 12:56:57,357 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740/info 2023-05-29 12:56:57,358 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 12:56:57,358 DEBUG [RS:0;jenkins-hbase4:46077] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46077 2023-05-29 12:56:57,358 INFO [RS:0;jenkins-hbase4:46077] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 12:56:57,358 INFO [RS:0;jenkins-hbase4:46077] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 12:56:57,358 DEBUG [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1022): About to register with Master. 2023-05-29 12:56:57,359 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:56:57,359 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 12:56:57,359 INFO [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,35997,1685365017047 with isa=jenkins-hbase4.apache.org/172.31.14.131:46077, startcode=1685365017095 2023-05-29 12:56:57,359 DEBUG [RS:0;jenkins-hbase4:46077] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 12:56:57,361 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:56:57,362 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName 
rep_barrier 2023-05-29 12:56:57,363 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42479, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 12:56:57,363 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:56:57,364 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 12:56:57,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35997] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:56:57,366 DEBUG [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124 2023-05-29 12:56:57,366 DEBUG [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39485 2023-05-29 12:56:57,366 DEBUG [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 12:56:57,367 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740/table 2023-05-29 12:56:57,367 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 12:56:57,368 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:56:57,369 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:56:57,370 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740 2023-05-29 12:56:57,371 DEBUG [RS:0;jenkins-hbase4:46077] zookeeper.ZKUtil(162): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:56:57,371 WARN [RS:0;jenkins-hbase4:46077] hbase.ZNodeClearer(69): Environment variable 
HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-29 12:56:57,371 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740 2023-05-29 12:56:57,371 INFO [RS:0;jenkins-hbase4:46077] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:56:57,371 DEBUG [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1946): logDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:56:57,372 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46077,1685365017095] 2023-05-29 12:56:57,375 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 12:56:57,375 DEBUG [RS:0;jenkins-hbase4:46077] zookeeper.ZKUtil(162): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:56:57,376 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 12:56:57,377 DEBUG [RS:0;jenkins-hbase4:46077] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 12:56:57,377 INFO [RS:0;jenkins-hbase4:46077] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 12:56:57,382 INFO [RS:0;jenkins-hbase4:46077] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 12:56:57,383 INFO [RS:0;jenkins-hbase4:46077] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 12:56:57,383 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:56:57,383 INFO [RS:0;jenkins-hbase4:46077] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 12:56:57,383 INFO [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 12:56:57,384 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=699350, jitterRate=-0.1107315868139267}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 12:56:57,384 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 12:56:57,384 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 12:56:57,384 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 12:56:57,384 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 12:56:57,384 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 12:56:57,384 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 12:56:57,384 INFO [RS:0;jenkins-hbase4:46077] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:57,384 DEBUG [RS:0;jenkins-hbase4:46077] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:57,384 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 12:56:57,385 DEBUG [RS:0;jenkins-hbase4:46077] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:57,385 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 12:56:57,385 DEBUG [RS:0;jenkins-hbase4:46077] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:57,385 DEBUG [RS:0;jenkins-hbase4:46077] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:57,385 DEBUG [RS:0;jenkins-hbase4:46077] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:57,385 DEBUG [RS:0;jenkins-hbase4:46077] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 12:56:57,385 DEBUG [RS:0;jenkins-hbase4:46077] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:57,385 DEBUG [RS:0;jenkins-hbase4:46077] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:57,386 DEBUG [RS:0;jenkins-hbase4:46077] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:57,386 DEBUG [RS:0;jenkins-hbase4:46077] executor.ExecutorService(93): Starting executor service 
name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:57,386 INFO [RS:0;jenkins-hbase4:46077] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:57,386 INFO [RS:0;jenkins-hbase4:46077] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:57,387 INFO [RS:0;jenkins-hbase4:46077] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:57,387 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 12:56:57,387 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-29 12:56:57,387 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-29 12:56:57,389 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-29 12:56:57,390 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-29 12:56:57,400 INFO [RS:0;jenkins-hbase4:46077] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 12:56:57,400 INFO [RS:0;jenkins-hbase4:46077] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46077,1685365017095-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 12:56:57,411 INFO [RS:0;jenkins-hbase4:46077] regionserver.Replication(203): jenkins-hbase4.apache.org,46077,1685365017095 started 2023-05-29 12:56:57,411 INFO [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46077,1685365017095, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46077, sessionid=0x1007703ccc10001 2023-05-29 12:56:57,411 DEBUG [RS:0;jenkins-hbase4:46077] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 12:56:57,411 DEBUG [RS:0;jenkins-hbase4:46077] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:56:57,411 DEBUG [RS:0;jenkins-hbase4:46077] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46077,1685365017095' 2023-05-29 12:56:57,411 DEBUG [RS:0;jenkins-hbase4:46077] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:56:57,411 DEBUG [RS:0;jenkins-hbase4:46077] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:56:57,412 DEBUG [RS:0;jenkins-hbase4:46077] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 12:56:57,412 DEBUG [RS:0;jenkins-hbase4:46077] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 12:56:57,412 DEBUG [RS:0;jenkins-hbase4:46077] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:56:57,412 DEBUG [RS:0;jenkins-hbase4:46077] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46077,1685365017095' 2023-05-29 12:56:57,412 DEBUG [RS:0;jenkins-hbase4:46077] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 12:56:57,412 DEBUG [RS:0;jenkins-hbase4:46077] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 12:56:57,413 DEBUG [RS:0;jenkins-hbase4:46077] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 12:56:57,413 INFO [RS:0;jenkins-hbase4:46077] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 12:56:57,413 INFO [RS:0;jenkins-hbase4:46077] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-29 12:56:57,515 INFO [RS:0;jenkins-hbase4:46077] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46077%2C1685365017095, suffix=, logDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,46077,1685365017095, archiveDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/oldWALs, maxLogs=32 2023-05-29 12:56:57,526 INFO [RS:0;jenkins-hbase4:46077] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,46077,1685365017095/jenkins-hbase4.apache.org%2C46077%2C1685365017095.1685365017517 2023-05-29 12:56:57,526 DEBUG [RS:0;jenkins-hbase4:46077] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK], DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK]] 2023-05-29 12:56:57,541 DEBUG [jenkins-hbase4:35997] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-29 12:56:57,541 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46077,1685365017095, state=OPENING 2023-05-29 12:56:57,543 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-29 12:56:57,544 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:56:57,544 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46077,1685365017095}] 2023-05-29 12:56:57,544 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 12:56:57,708 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:56:57,708 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-29 12:56:57,711 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57164, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-29 12:56:57,716 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-29 12:56:57,716 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:56:57,718 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46077%2C1685365017095.meta, suffix=.meta, logDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,46077,1685365017095, archiveDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/oldWALs, maxLogs=32 2023-05-29 12:56:57,747 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,46077,1685365017095/jenkins-hbase4.apache.org%2C46077%2C1685365017095.meta.1685365017720.meta 2023-05-29 12:56:57,747 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK], DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] 2023-05-29 12:56:57,748 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:56:57,748 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-29 12:56:57,748 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-29 12:56:57,749 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-29 12:56:57,749 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-29 12:56:57,749 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:56:57,749 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-29 12:56:57,749 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-29 12:56:57,751 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 12:56:57,753 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740/info 2023-05-29 12:56:57,753 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740/info 2023-05-29 12:56:57,753 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 12:56:57,754 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:56:57,754 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 12:56:57,755 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:56:57,755 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:56:57,756 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 12:56:57,757 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:56:57,757 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 12:56:57,758 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740/table 2023-05-29 12:56:57,758 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740/table 2023-05-29 12:56:57,760 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 12:56:57,761 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:56:57,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740 2023-05-29 12:56:57,763 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/meta/1588230740 2023-05-29 12:56:57,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 12:56:57,768 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 12:56:57,769 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=763174, jitterRate=-0.02957455813884735}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 12:56:57,769 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 12:56:57,771 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685365017708 2023-05-29 12:56:57,774 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-29 12:56:57,774 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-29 12:56:57,775 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46077,1685365017095, state=OPEN 2023-05-29 12:56:57,777 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-29 12:56:57,777 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 12:56:57,780 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-29 12:56:57,780 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46077,1685365017095 in 233 msec 2023-05-29 12:56:57,782 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-29 12:56:57,783 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 393 msec 2023-05-29 12:56:57,785 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 484 msec 2023-05-29 12:56:57,785 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685365017785, completionTime=-1 2023-05-29 12:56:57,785 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-29 12:56:57,785 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-29 12:56:57,790 DEBUG [hconnection-0x61feb787-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 12:56:57,792 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57178, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 12:56:57,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-29 12:56:57,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685365077794 2023-05-29 12:56:57,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685365137794 2023-05-29 12:56:57,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 8 msec 2023-05-29 12:56:57,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35997,1685365017047-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:57,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35997,1685365017047-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:57,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35997,1685365017047-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:57,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:35997, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:57,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:57,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-29 12:56:57,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 12:56:57,807 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-29 12:56:57,807 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-29 12:56:57,808 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 12:56:57,810 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 12:56:57,812 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/.tmp/data/hbase/namespace/6ca12e0044f32dc0713c67283400584c 2023-05-29 12:56:57,812 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/.tmp/data/hbase/namespace/6ca12e0044f32dc0713c67283400584c empty. 2023-05-29 12:56:57,813 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/.tmp/data/hbase/namespace/6ca12e0044f32dc0713c67283400584c 2023-05-29 12:56:57,813 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-29 12:56:57,825 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-29 12:56:57,826 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6ca12e0044f32dc0713c67283400584c, NAME => 'hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/.tmp 2023-05-29 12:56:57,835 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:56:57,835 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 6ca12e0044f32dc0713c67283400584c, disabling compactions & flushes 2023-05-29 12:56:57,835 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 
2023-05-29 12:56:57,835 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 2023-05-29 12:56:57,835 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. after waiting 0 ms 2023-05-29 12:56:57,835 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 2023-05-29 12:56:57,836 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 2023-05-29 12:56:57,836 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 6ca12e0044f32dc0713c67283400584c: 2023-05-29 12:56:57,838 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 12:56:57,840 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365017839"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685365017839"}]},"ts":"1685365017839"} 2023-05-29 12:56:57,842 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 12:56:57,843 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 12:56:57,844 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365017843"}]},"ts":"1685365017843"} 2023-05-29 12:56:57,845 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-29 12:56:57,852 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6ca12e0044f32dc0713c67283400584c, ASSIGN}] 2023-05-29 12:56:57,853 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6ca12e0044f32dc0713c67283400584c, ASSIGN 2023-05-29 12:56:57,855 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=6ca12e0044f32dc0713c67283400584c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46077,1685365017095; forceNewPlan=false, retain=false 2023-05-29 12:56:58,006 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6ca12e0044f32dc0713c67283400584c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:56:58,006 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365018005"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685365018005"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685365018005"}]},"ts":"1685365018005"} 2023-05-29 12:56:58,008 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 6ca12e0044f32dc0713c67283400584c, server=jenkins-hbase4.apache.org,46077,1685365017095}] 2023-05-29 12:56:58,166 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 2023-05-29 12:56:58,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6ca12e0044f32dc0713c67283400584c, NAME => 'hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:56:58,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 6ca12e0044f32dc0713c67283400584c 2023-05-29 12:56:58,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:56:58,167 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6ca12e0044f32dc0713c67283400584c 2023-05-29 12:56:58,167 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6ca12e0044f32dc0713c67283400584c 2023-05-29 12:56:58,168 INFO [StoreOpener-6ca12e0044f32dc0713c67283400584c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6ca12e0044f32dc0713c67283400584c 2023-05-29 12:56:58,170 DEBUG [StoreOpener-6ca12e0044f32dc0713c67283400584c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/namespace/6ca12e0044f32dc0713c67283400584c/info 2023-05-29 12:56:58,170 DEBUG [StoreOpener-6ca12e0044f32dc0713c67283400584c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/namespace/6ca12e0044f32dc0713c67283400584c/info 2023-05-29 12:56:58,170 INFO [StoreOpener-6ca12e0044f32dc0713c67283400584c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6ca12e0044f32dc0713c67283400584c columnFamilyName info 2023-05-29 12:56:58,171 INFO [StoreOpener-6ca12e0044f32dc0713c67283400584c-1] regionserver.HStore(310): Store=6ca12e0044f32dc0713c67283400584c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:56:58,172 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/namespace/6ca12e0044f32dc0713c67283400584c 2023-05-29 12:56:58,173 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/namespace/6ca12e0044f32dc0713c67283400584c 2023-05-29 12:56:58,176 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6ca12e0044f32dc0713c67283400584c 2023-05-29 12:56:58,178 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/hbase/namespace/6ca12e0044f32dc0713c67283400584c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:56:58,178 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6ca12e0044f32dc0713c67283400584c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=883478, jitterRate=0.12340083718299866}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:56:58,179 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6ca12e0044f32dc0713c67283400584c: 2023-05-29 12:56:58,180 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c., pid=6, masterSystemTime=1685365018161 2023-05-29 12:56:58,182 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 2023-05-29 12:56:58,182 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 
2023-05-29 12:56:58,183 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6ca12e0044f32dc0713c67283400584c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:56:58,184 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365018183"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685365018183"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685365018183"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685365018183"}]},"ts":"1685365018183"} 2023-05-29 12:56:58,188 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-29 12:56:58,188 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 6ca12e0044f32dc0713c67283400584c, server=jenkins-hbase4.apache.org,46077,1685365017095 in 177 msec 2023-05-29 12:56:58,191 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-29 12:56:58,191 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=6ca12e0044f32dc0713c67283400584c, ASSIGN in 336 msec 2023-05-29 12:56:58,192 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 12:56:58,193 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365018192"}]},"ts":"1685365018192"} 2023-05-29 12:56:58,194 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-29 12:56:58,197 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 12:56:58,199 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 392 msec 2023-05-29 12:56:58,208 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-29 12:56:58,210 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:56:58,210 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:56:58,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-29 12:56:58,223 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): 
master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:56:58,227 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-05-29 12:56:58,237 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-29 12:56:58,245 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:56:58,250 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-05-29 12:56:58,261 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-29 12:56:58,264 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-29 12:56:58,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.143sec 2023-05-29 12:56:58,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-29 12:56:58,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-29 12:56:58,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-29 12:56:58,265 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35997,1685365017047-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-29 12:56:58,265 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35997,1685365017047-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-29 12:56:58,267 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-29 12:56:58,325 DEBUG [Listener at localhost/37799] zookeeper.ReadOnlyZKClient(139): Connect 0x629c0801 to 127.0.0.1:51115 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:56:58,329 DEBUG [Listener at localhost/37799] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1390ee8b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:56:58,331 DEBUG [hconnection-0x2a6ef6ae-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 12:56:58,333 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57186, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 12:56:58,335 INFO [Listener at localhost/37799] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,35997,1685365017047 2023-05-29 12:56:58,335 INFO [Listener at localhost/37799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:56:58,339 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-29 12:56:58,339 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:56:58,339 INFO [Listener at localhost/37799] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-29 12:56:58,352 INFO [Listener at localhost/37799] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 12:56:58,352 INFO [Listener at localhost/37799] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:56:58,352 INFO [Listener at localhost/37799] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 12:56:58,352 INFO [Listener at localhost/37799] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 12:56:58,352 INFO [Listener at localhost/37799] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:56:58,352 INFO [Listener at localhost/37799] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 12:56:58,352 INFO [Listener at localhost/37799] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-29 12:56:58,354 INFO [Listener at localhost/37799] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43675 2023-05-29 12:56:58,354 INFO [Listener at localhost/37799] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 12:56:58,356 DEBUG [Listener at localhost/37799] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 12:56:58,356 INFO [Listener at localhost/37799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:56:58,357 INFO [Listener at localhost/37799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:56:58,358 INFO [Listener at localhost/37799] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43675 connecting to ZooKeeper ensemble=127.0.0.1:51115 2023-05-29 12:56:58,362 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:436750x0, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 12:56:58,363 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43675-0x1007703ccc10005 connected 2023-05-29 12:56:58,364 DEBUG [Listener at localhost/37799] zookeeper.ZKUtil(162): regionserver:43675-0x1007703ccc10005, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 12:56:58,364 DEBUG [Listener at localhost/37799] zookeeper.ZKUtil(162): regionserver:43675-0x1007703ccc10005, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-05-29 12:56:58,365 DEBUG [Listener at localhost/37799] zookeeper.ZKUtil(164): regionserver:43675-0x1007703ccc10005, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 12:56:58,365 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43675 2023-05-29 12:56:58,366 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43675 2023-05-29 12:56:58,366 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43675 2023-05-29 12:56:58,367 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43675 2023-05-29 12:56:58,367 DEBUG [Listener at localhost/37799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43675 2023-05-29 12:56:58,376 INFO [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(951): ClusterId : 9ce537c2-7e48-4f8f-a823-3ff3e733f0be 2023-05-29 12:56:58,376 DEBUG [RS:1;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 12:56:58,378 DEBUG [RS:1;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 12:56:58,378 DEBUG [RS:1;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 12:56:58,380 DEBUG 
[RS:1;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 12:56:58,380 DEBUG [RS:1;jenkins-hbase4:43675] zookeeper.ReadOnlyZKClient(139): Connect 0x6494bdbe to 127.0.0.1:51115 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:56:58,384 DEBUG [RS:1;jenkins-hbase4:43675] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a534f5c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:56:58,384 DEBUG [RS:1;jenkins-hbase4:43675] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6e6cd6ce, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 12:56:58,394 DEBUG [RS:1;jenkins-hbase4:43675] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:43675 2023-05-29 12:56:58,394 INFO [RS:1;jenkins-hbase4:43675] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 12:56:58,395 INFO [RS:1;jenkins-hbase4:43675] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 12:56:58,395 DEBUG [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(1022): About to register with Master. 2023-05-29 12:56:58,395 INFO [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,35997,1685365017047 with isa=jenkins-hbase4.apache.org/172.31.14.131:43675, startcode=1685365018351 2023-05-29 12:56:58,395 DEBUG [RS:1;jenkins-hbase4:43675] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 12:56:58,398 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50347, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 12:56:58,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35997] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:56:58,399 DEBUG [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124 2023-05-29 12:56:58,399 DEBUG [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39485 2023-05-29 12:56:58,399 DEBUG [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 12:56:58,402 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:56:58,402 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:56:58,402 DEBUG [RS:1;jenkins-hbase4:43675] zookeeper.ZKUtil(162): regionserver:43675-0x1007703ccc10005, 
quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:56:58,402 WARN [RS:1;jenkins-hbase4:43675] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-29 12:56:58,402 INFO [RS:1;jenkins-hbase4:43675] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:56:58,402 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43675,1685365018351] 2023-05-29 12:56:58,402 DEBUG [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(1946): logDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:56:58,403 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:56:58,404 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:56:58,407 DEBUG [RS:1;jenkins-hbase4:43675] zookeeper.ZKUtil(162): regionserver:43675-0x1007703ccc10005, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:56:58,408 DEBUG [RS:1;jenkins-hbase4:43675] zookeeper.ZKUtil(162): regionserver:43675-0x1007703ccc10005, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:56:58,408 DEBUG [RS:1;jenkins-hbase4:43675] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 12:56:58,409 INFO [RS:1;jenkins-hbase4:43675] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 12:56:58,411 INFO [RS:1;jenkins-hbase4:43675] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 12:56:58,412 INFO [RS:1;jenkins-hbase4:43675] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 12:56:58,412 INFO [RS:1;jenkins-hbase4:43675] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:58,412 INFO [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 12:56:58,413 INFO [RS:1;jenkins-hbase4:43675] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-29 12:56:58,414 DEBUG [RS:1;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:58,414 DEBUG [RS:1;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:58,414 DEBUG [RS:1;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:58,414 DEBUG [RS:1;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:58,414 DEBUG [RS:1;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:58,414 DEBUG [RS:1;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 12:56:58,414 DEBUG [RS:1;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:58,414 DEBUG [RS:1;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:58,414 DEBUG [RS:1;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:58,414 DEBUG [RS:1;jenkins-hbase4:43675] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:56:58,420 INFO [RS:1;jenkins-hbase4:43675] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:58,420 INFO [RS:1;jenkins-hbase4:43675] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:58,420 INFO [RS:1;jenkins-hbase4:43675] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 12:56:58,431 INFO [RS:1;jenkins-hbase4:43675] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 12:56:58,432 INFO [RS:1;jenkins-hbase4:43675] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43675,1685365018351-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 12:56:58,443 INFO [RS:1;jenkins-hbase4:43675] regionserver.Replication(203): jenkins-hbase4.apache.org,43675,1685365018351 started 2023-05-29 12:56:58,443 INFO [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43675,1685365018351, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43675, sessionid=0x1007703ccc10005 2023-05-29 12:56:58,443 INFO [Listener at localhost/37799] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase4:43675,5,FailOnTimeoutGroup] 2023-05-29 12:56:58,443 DEBUG [RS:1;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 12:56:58,443 INFO [Listener at localhost/37799] wal.TestLogRolling(323): Replication=2 2023-05-29 12:56:58,443 DEBUG [RS:1;jenkins-hbase4:43675] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:56:58,444 DEBUG [RS:1;jenkins-hbase4:43675] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43675,1685365018351' 2023-05-29 12:56:58,444 DEBUG [RS:1;jenkins-hbase4:43675] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:56:58,445 DEBUG [RS:1;jenkins-hbase4:43675] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:56:58,446 DEBUG [Listener at localhost/37799] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-29 12:56:58,446 DEBUG [RS:1;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 12:56:58,446 DEBUG [RS:1;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 12:56:58,446 DEBUG [RS:1;jenkins-hbase4:43675] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:56:58,447 DEBUG [RS:1;jenkins-hbase4:43675] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43675,1685365018351' 2023-05-29 12:56:58,447 DEBUG [RS:1;jenkins-hbase4:43675] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 12:56:58,447 DEBUG [RS:1;jenkins-hbase4:43675] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 12:56:58,448 DEBUG [RS:1;jenkins-hbase4:43675] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 12:56:58,448 INFO [RS:1;jenkins-hbase4:43675] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 12:56:58,448 INFO [RS:1;jenkins-hbase4:43675] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-29 12:56:58,449 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59878, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-29 12:56:58,451 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35997] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 
2023-05-29 12:56:58,451 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35997] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-05-29 12:56:58,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35997] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 12:56:58,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35997] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-05-29 12:56:58,455 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 12:56:58,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35997] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 2023-05-29 12:56:58,456 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 12:56:58,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35997] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 12:56:58,458 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77 2023-05-29 12:56:58,459 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77 empty. 
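For context on the create request and the two TableDescriptorChecker warnings above: the descriptor logged there (single 'info' family, MAX_FILESIZE=786432, MEMSTORE_FLUSHSIZE=8192, deliberately tiny so the test rolls and splits quickly) is the kind a client builds through the HBase 2.x Admin API. The following is a minimal sketch, not the actual test source; the class name and the assumption that hbase-site.xml points at this minicluster are illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTinyRegionTableSketch {
      public static void main(String[] args) throws Exception {
        // Assumes the client config resolves to the minicluster started earlier in this log.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableDescriptor td = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("TestLogRolling-testLogRollOnDatanodeDeath"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                  .setMaxVersions(1)                 // VERSIONS => '1' in the logged descriptor
                  .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
                  .build())
              .setMaxFileSize(786432L)       // the value flagged by the MAX_FILESIZE warning
              .setMemStoreFlushSize(8192L)   // the value flagged by the MEMSTORE_FLUSHSIZE warning
              .build();
          // Submits a CreateTableProcedure on the master, like pid=9 in the entries above.
          admin.createTable(td);
        }
      }
    }

Such small file-size and flush-size values are only sensible in a test; the warnings above are the master pointing that out.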
2023-05-29 12:56:58,459 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77 2023-05-29 12:56:58,460 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-05-29 12:56:58,474 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-05-29 12:56:58,475 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8cf0384708e8e810674c0bd362349a77, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/.tmp 2023-05-29 12:56:58,486 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:56:58,486 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing 8cf0384708e8e810674c0bd362349a77, disabling compactions & flushes 2023-05-29 12:56:58,486 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. 2023-05-29 12:56:58,486 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. 2023-05-29 12:56:58,486 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. after waiting 0 ms 2023-05-29 12:56:58,486 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. 2023-05-29 12:56:58,486 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. 
2023-05-29 12:56:58,486 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for 8cf0384708e8e810674c0bd362349a77: 2023-05-29 12:56:58,489 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 12:56:58,491 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685365018491"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685365018491"}]},"ts":"1685365018491"} 2023-05-29 12:56:58,493 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 12:56:58,494 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 12:56:58,495 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365018494"}]},"ts":"1685365018494"} 2023-05-29 12:56:58,496 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-05-29 12:56:58,503 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-05-29 12:56:58,505 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-05-29 12:56:58,505 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-05-29 12:56:58,505 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-05-29 12:56:58,506 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=8cf0384708e8e810674c0bd362349a77, ASSIGN}] 2023-05-29 12:56:58,507 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=8cf0384708e8e810674c0bd362349a77, ASSIGN 2023-05-29 12:56:58,508 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=8cf0384708e8e810674c0bd362349a77, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43675,1685365018351; forceNewPlan=false, retain=false 2023-05-29 12:56:58,551 INFO [RS:1;jenkins-hbase4:43675] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43675%2C1685365018351, suffix=, logDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351, 
archiveDir=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/oldWALs, maxLogs=32 2023-05-29 12:56:58,562 INFO [RS:1;jenkins-hbase4:43675] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365018552 2023-05-29 12:56:58,563 DEBUG [RS:1;jenkins-hbase4:43675] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK], DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] 2023-05-29 12:56:58,661 INFO [jenkins-hbase4:35997] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-05-29 12:56:58,662 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=8cf0384708e8e810674c0bd362349a77, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:56:58,662 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685365018662"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685365018662"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685365018662"}]},"ts":"1685365018662"} 2023-05-29 12:56:58,664 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 8cf0384708e8e810674c0bd362349a77, server=jenkins-hbase4.apache.org,43675,1685365018351}] 2023-05-29 12:56:58,818 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:56:58,819 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-29 12:56:58,821 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56572, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-29 12:56:58,827 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. 
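The AbstractFSWAL line above reports blocksize=256 MB and rollsize=128 MB for the new WAL. As a rough sketch of where those numbers usually come from: the roll size is the block size scaled by a multiplier. The property names below are the standard HBase/HDFS keys, but the fallback values shown are assumptions for illustration, not values read from this run's configuration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalRollSizeSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // WAL block size; when unset it typically tracks the HDFS block size.
        long blocksize = conf.getLong("hbase.regionserver.hlog.blocksize",
            conf.getLong("dfs.blocksize", 128L * 1024 * 1024));
        // Roll the WAL once it reaches blocksize * multiplier (0.5 is an assumed default here).
        float multiplier = conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        long rollsize = (long) (blocksize * multiplier);
        // A 256 MB block size with a 0.5 multiplier gives the 128 MB rollsize logged above;
        // maxLogs=32 in the same entry typically comes from "hbase.regionserver.maxlogs".
        System.out.println("blocksize=" + blocksize + " rollsize=" + rollsize);
      }
    }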
2023-05-29 12:56:58,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8cf0384708e8e810674c0bd362349a77, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:56:58,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath 8cf0384708e8e810674c0bd362349a77 2023-05-29 12:56:58,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:56:58,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8cf0384708e8e810674c0bd362349a77 2023-05-29 12:56:58,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8cf0384708e8e810674c0bd362349a77 2023-05-29 12:56:58,829 INFO [StoreOpener-8cf0384708e8e810674c0bd362349a77-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8cf0384708e8e810674c0bd362349a77 2023-05-29 12:56:58,830 DEBUG [StoreOpener-8cf0384708e8e810674c0bd362349a77-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/info 2023-05-29 12:56:58,830 DEBUG [StoreOpener-8cf0384708e8e810674c0bd362349a77-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/info 2023-05-29 12:56:58,831 INFO [StoreOpener-8cf0384708e8e810674c0bd362349a77-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8cf0384708e8e810674c0bd362349a77 columnFamilyName info 2023-05-29 12:56:58,832 INFO [StoreOpener-8cf0384708e8e810674c0bd362349a77-1] regionserver.HStore(310): Store=8cf0384708e8e810674c0bd362349a77/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:56:58,833 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77 2023-05-29 12:56:58,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77 2023-05-29 12:56:58,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8cf0384708e8e810674c0bd362349a77 2023-05-29 12:56:58,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:56:58,841 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8cf0384708e8e810674c0bd362349a77; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=764872, jitterRate=-0.02741585671901703}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:56:58,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8cf0384708e8e810674c0bd362349a77: 2023-05-29 12:56:58,843 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77., pid=11, masterSystemTime=1685365018818 2023-05-29 12:56:58,846 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. 2023-05-29 12:56:58,846 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. 
2023-05-29 12:56:58,847 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=8cf0384708e8e810674c0bd362349a77, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:56:58,847 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685365018847"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685365018847"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685365018847"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685365018847"}]},"ts":"1685365018847"} 2023-05-29 12:56:58,852 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-29 12:56:58,852 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 8cf0384708e8e810674c0bd362349a77, server=jenkins-hbase4.apache.org,43675,1685365018351 in 185 msec 2023-05-29 12:56:58,855 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-29 12:56:58,855 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=8cf0384708e8e810674c0bd362349a77, ASSIGN in 346 msec 2023-05-29 12:56:58,856 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 12:56:58,857 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365018857"}]},"ts":"1685365018857"} 2023-05-29 12:56:58,858 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-05-29 12:56:58,861 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 12:56:58,863 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 410 msec 2023-05-29 12:57:00,967 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-29 12:57:03,377 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-29 12:57:03,378 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-29 12:57:04,409 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-05-29 12:57:08,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35997] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 12:57:08,458 INFO [Listener at localhost/37799] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-05-29 12:57:08,461 DEBUG [Listener at localhost/37799] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 2023-05-29 12:57:08,461 DEBUG [Listener at localhost/37799] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. 2023-05-29 12:57:08,474 WARN [Listener at localhost/37799] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:57:08,477 WARN [Listener at localhost/37799] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:57:08,478 INFO [Listener at localhost/37799] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:57:08,482 INFO [Listener at localhost/37799] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/java.io.tmpdir/Jetty_localhost_45441_datanode____wcsh9z/webapp 2023-05-29 12:57:08,572 INFO [Listener at localhost/37799] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45441 2023-05-29 12:57:08,582 WARN [Listener at localhost/44371] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:57:08,598 WARN [Listener at localhost/44371] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:57:08,601 WARN [Listener at localhost/44371] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:57:08,603 INFO [Listener at localhost/44371] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:57:08,607 INFO [Listener at localhost/44371] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/java.io.tmpdir/Jetty_localhost_36009_datanode____.wji143/webapp 2023-05-29 12:57:08,678 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1c56a1954cb68b72: Processing first storage report for DS-b67be958-5caa-407a-a7d0-7b80ffcec15b from datanode b30a3a9f-2d2c-426f-a710-b1d25e4d32b4 2023-05-29 12:57:08,678 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1c56a1954cb68b72: from storage DS-b67be958-5caa-407a-a7d0-7b80ffcec15b node DatanodeRegistration(127.0.0.1:45517, datanodeUuid=b30a3a9f-2d2c-426f-a710-b1d25e4d32b4, infoPort=33831, infoSecurePort=0, ipcPort=44371, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:57:08,679 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1c56a1954cb68b72: Processing first storage report for DS-ff14d32c-0fcc-4ab0-9e48-5e59de9156ae from datanode b30a3a9f-2d2c-426f-a710-b1d25e4d32b4 2023-05-29 12:57:08,679 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x1c56a1954cb68b72: from storage DS-ff14d32c-0fcc-4ab0-9e48-5e59de9156ae node DatanodeRegistration(127.0.0.1:45517, datanodeUuid=b30a3a9f-2d2c-426f-a710-b1d25e4d32b4, infoPort=33831, infoSecurePort=0, ipcPort=44371, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:57:08,713 INFO [Listener at localhost/44371] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36009 2023-05-29 12:57:08,722 WARN [Listener at localhost/37803] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:57:08,740 WARN [Listener at localhost/37803] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:57:08,742 WARN [Listener at localhost/37803] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:57:08,743 INFO [Listener at localhost/37803] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:57:08,747 INFO [Listener at localhost/37803] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/java.io.tmpdir/Jetty_localhost_34877_datanode____.72yema/webapp 2023-05-29 12:57:08,822 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6bed5d691feec050: Processing first storage report for DS-c39cfb7e-f018-454c-a491-a01703c367f0 from datanode c8645d32-4136-47c0-a507-195bcd8330c7 2023-05-29 12:57:08,822 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6bed5d691feec050: from storage DS-c39cfb7e-f018-454c-a491-a01703c367f0 node DatanodeRegistration(127.0.0.1:44799, datanodeUuid=c8645d32-4136-47c0-a507-195bcd8330c7, infoPort=42857, infoSecurePort=0, ipcPort=37803, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:57:08,822 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6bed5d691feec050: Processing first storage report for DS-2a6fef67-1f95-459f-8ba0-9cf53f54f163 from datanode c8645d32-4136-47c0-a507-195bcd8330c7 2023-05-29 12:57:08,822 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6bed5d691feec050: from storage DS-2a6fef67-1f95-459f-8ba0-9cf53f54f163 node DatanodeRegistration(127.0.0.1:44799, datanodeUuid=c8645d32-4136-47c0-a507-195bcd8330c7, infoPort=42857, infoSecurePort=0, ipcPort=37803, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:57:08,853 INFO [Listener at localhost/37803] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34877 2023-05-29 12:57:08,862 WARN [Listener at localhost/45381] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:57:08,965 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1a442285639dbe45: Processing first storage report for 
DS-524a067d-2f90-41b7-ac97-b3e0c11fc75a from datanode 8da5704b-f8c9-48bf-a7ec-3b14571f4aae 2023-05-29 12:57:08,965 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1a442285639dbe45: from storage DS-524a067d-2f90-41b7-ac97-b3e0c11fc75a node DatanodeRegistration(127.0.0.1:43653, datanodeUuid=8da5704b-f8c9-48bf-a7ec-3b14571f4aae, infoPort=33359, infoSecurePort=0, ipcPort=45381, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 12:57:08,966 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1a442285639dbe45: Processing first storage report for DS-1b53e0be-e2dc-4017-8385-c68e757810d7 from datanode 8da5704b-f8c9-48bf-a7ec-3b14571f4aae 2023-05-29 12:57:08,966 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1a442285639dbe45: from storage DS-1b53e0be-e2dc-4017-8385-c68e757810d7 node DatanodeRegistration(127.0.0.1:43653, datanodeUuid=8da5704b-f8c9-48bf-a7ec-3b14571f4aae, infoPort=33359, infoSecurePort=0, ipcPort=45381, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:57:08,968 WARN [Listener at localhost/45381] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:57:08,970 WARN [ResponseProcessor for block BP-1039722355-172.31.14.131-1685365016397:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1039722355-172.31.14.131-1685365016397:blk_1073741838_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:57:08,970 WARN [ResponseProcessor for block BP-1039722355-172.31.14.131-1685365016397:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1039722355-172.31.14.131-1685365016397:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:57:08,972 WARN [DataStreamer for file /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365018552 block BP-1039722355-172.31.14.131-1685365016397:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-1039722355-172.31.14.131-1685365016397:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK], DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK]) is bad. 
2023-05-29 12:57:08,972 WARN [ResponseProcessor for block BP-1039722355-172.31.14.131-1685365016397:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1039722355-172.31.14.131-1685365016397:blk_1073741829_1005 java.io.IOException: Bad response ERROR for BP-1039722355-172.31.14.131-1685365016397:blk_1073741829_1005 from datanode DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-29 12:57:08,972 WARN [ResponseProcessor for block BP-1039722355-172.31.14.131-1685365016397:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1039722355-172.31.14.131-1685365016397:blk_1073741832_1008 java.io.IOException: Bad response ERROR for BP-1039722355-172.31.14.131-1685365016397:blk_1073741832_1008 from datanode DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-29 12:57:08,973 WARN [DataStreamer for file /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/WALs/jenkins-hbase4.apache.org,35997,1685365017047/jenkins-hbase4.apache.org%2C35997%2C1685365017047.1685365017197 block BP-1039722355-172.31.14.131-1685365016397:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1039722355-172.31.14.131-1685365016397:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK], DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK]) is bad. 2023-05-29 12:57:08,973 WARN [PacketResponder: BP-1039722355-172.31.14.131-1685365016397:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33483]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:08,973 WARN [DataStreamer for file /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,46077,1685365017095/jenkins-hbase4.apache.org%2C46077%2C1685365017095.meta.1685365017720.meta block 
BP-1039722355-172.31.14.131-1685365016397:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1039722355-172.31.14.131-1685365016397:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK], DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK]) is bad. 2023-05-29 12:57:08,973 WARN [PacketResponder: BP-1039722355-172.31.14.131-1685365016397:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33483]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:08,973 WARN [DataStreamer for file /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,46077,1685365017095/jenkins-hbase4.apache.org%2C46077%2C1685365017095.1685365017517 block BP-1039722355-172.31.14.131-1685365016397:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1039722355-172.31.14.131-1685365016397:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK], DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK]) is bad. 
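The error-recovery warnings above mark datanode 127.0.0.1:33483 as bad in every open WAL pipeline, which is exactly the condition testLogRollOnDatanodeDeath is exercising. The sketch below shows one way a minicluster test can provoke that state; it is not the actual test code, and the utility instance and datanode index are illustrative assumptions.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class DatanodeDeathSketch {
      public static void killAndRestoreOneDatanode(HBaseTestingUtility testUtil) throws Exception {
        MiniDFSCluster dfs = testUtil.getDFSCluster();
        // Stop one member of the current write pipelines. The DFSClient's DataStreamer
        // then reports it as "bad" and rebuilds the pipeline, producing warnings like those above.
        MiniDFSCluster.DataNodeProperties stopped = dfs.stopDataNode(0);
        // ... write to the table / roll the WAL here so pipeline error recovery actually runs ...
        dfs.restartDataNode(stopped, true); // bring the datanode back once the scenario is exercised
      }
    }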
2023-05-29 12:57:08,983 INFO [Listener at localhost/45381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:57:08,985 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1129005207_17 at /127.0.0.1:58074 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:34757:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58074 dst: /127.0.0.1:34757 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:08,985 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_679236728_17 at /127.0.0.1:58106 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:34757:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58106 dst: /127.0.0.1:34757 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:08,987 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:58182 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:34757:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58182 dst: /127.0.0.1:34757 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:34757 remote=/127.0.0.1:58182]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:08,987 WARN [PacketResponder: BP-1039722355-172.31.14.131-1685365016397:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:34757]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:08,987 WARN [PacketResponder: BP-1039722355-172.31.14.131-1685365016397:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:34757]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:08,987 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_679236728_17 at /127.0.0.1:58118 
[Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:34757:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58118 dst: /127.0.0.1:34757 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:34757 remote=/127.0.0.1:58118]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:08,989 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:54742 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:33483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54742 dst: /127.0.0.1:33483 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:08,990 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_679236728_17 at /127.0.0.1:54702 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:33483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54702 dst: /127.0.0.1:33483 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:09,011 WARN [BP-1039722355-172.31.14.131-1685365016397 heartbeating to localhost/127.0.0.1:39485] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1039722355-172.31.14.131-1685365016397 (Datanode Uuid 6191f094-b773-4427-9636-a59e1a64a84c) service to localhost/127.0.0.1:39485 2023-05-29 12:57:09,012 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data3/current/BP-1039722355-172.31.14.131-1685365016397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:09,012 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data4/current/BP-1039722355-172.31.14.131-1685365016397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:09,088 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_679236728_17 at /127.0.0.1:54700 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:33483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54700 dst: /127.0.0.1:33483 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:09,089 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1129005207_17 at /127.0.0.1:54660 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:33483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54660 dst: /127.0.0.1:33483 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:09,091 WARN [Listener at localhost/45381] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:57:09,091 WARN [ResponseProcessor for block BP-1039722355-172.31.14.131-1685365016397:blk_1073741832_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1039722355-172.31.14.131-1685365016397:blk_1073741832_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:57:09,091 WARN [ResponseProcessor for block BP-1039722355-172.31.14.131-1685365016397:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1039722355-172.31.14.131-1685365016397:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:57:09,091 WARN [ResponseProcessor for block BP-1039722355-172.31.14.131-1685365016397:blk_1073741833_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1039722355-172.31.14.131-1685365016397:blk_1073741833_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:57:09,091 WARN [ResponseProcessor for block BP-1039722355-172.31.14.131-1685365016397:blk_1073741838_1017] 
hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1039722355-172.31.14.131-1685365016397:blk_1073741838_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:57:09,098 INFO [Listener at localhost/45381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:57:09,201 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1129005207_17 at /127.0.0.1:58850 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:34757:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58850 dst: /127.0.0.1:34757 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:09,202 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_679236728_17 at /127.0.0.1:58866 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:34757:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58866 dst: /127.0.0.1:34757 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:09,202 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_679236728_17 at /127.0.0.1:58884 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:34757:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58884 dst: /127.0.0.1:34757 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:09,202 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:58880 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:34757:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58880 dst: /127.0.0.1:34757 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:09,203 WARN [BP-1039722355-172.31.14.131-1685365016397 heartbeating to localhost/127.0.0.1:39485] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 12:57:09,204 WARN [BP-1039722355-172.31.14.131-1685365016397 heartbeating to localhost/127.0.0.1:39485] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1039722355-172.31.14.131-1685365016397 (Datanode Uuid ea7f5168-7fa6-4973-a1fc-02e009c04a5b) service to localhost/127.0.0.1:39485 2023-05-29 12:57:09,205 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data1/current/BP-1039722355-172.31.14.131-1685365016397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:09,205 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data2/current/BP-1039722355-172.31.14.131-1685365016397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:09,210 DEBUG [Listener at localhost/45381] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 12:57:09,212 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54082, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 12:57:09,213 WARN [RS:1;jenkins-hbase4:43675.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:09,214 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C43675%2C1685365018351:(num 1685365018552) roll requested 2023-05-29 12:57:09,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43675] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:09,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43675] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:54082 deadline: 1685365039213, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-29 12:57:09,219 WARN [Thread-629] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741839_1019 2023-05-29 12:57:09,222 WARN [Thread-629] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK] 2023-05-29 12:57:09,231 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-29 12:57:09,231 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365018552 with entries=1, filesize=466 B; new WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365029214 2023-05-29 12:57:09,234 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43653,DS-524a067d-2f90-41b7-ac97-b3e0c11fc75a,DISK], DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK]] 2023-05-29 12:57:09,235 WARN [Close-WAL-Writer-0] 
wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:09,235 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365018552 is not closed yet, will try archiving it next time 2023-05-29 12:57:09,235 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365018552; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:09,236 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365018552 to hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/oldWALs/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365018552 2023-05-29 12:57:21,251 INFO [Listener at localhost/45381] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365029214 2023-05-29 12:57:21,251 WARN [Listener at localhost/45381] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:57:21,253 WARN [ResponseProcessor for block BP-1039722355-172.31.14.131-1685365016397:blk_1073741840_1020] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1039722355-172.31.14.131-1685365016397:blk_1073741840_1020 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:57:21,254 WARN [DataStreamer for file /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365029214 block BP-1039722355-172.31.14.131-1685365016397:blk_1073741840_1020] hdfs.DataStreamer(1548): Error Recovery for 
BP-1039722355-172.31.14.131-1685365016397:blk_1073741840_1020 in pipeline [DatanodeInfoWithStorage[127.0.0.1:43653,DS-524a067d-2f90-41b7-ac97-b3e0c11fc75a,DISK], DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:43653,DS-524a067d-2f90-41b7-ac97-b3e0c11fc75a,DISK]) is bad. 2023-05-29 12:57:21,257 INFO [Listener at localhost/45381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:57:21,259 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:46358 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:45517:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46358 dst: /127.0.0.1:45517 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45517 remote=/127.0.0.1:46358]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:21,260 WARN [PacketResponder: BP-1039722355-172.31.14.131-1685365016397:blk_1073741840_1020, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45517]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:21,260 ERROR 
[DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:55064 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:43653:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55064 dst: /127.0.0.1:43653 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:21,362 WARN [BP-1039722355-172.31.14.131-1685365016397 heartbeating to localhost/127.0.0.1:39485] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 12:57:21,362 WARN [BP-1039722355-172.31.14.131-1685365016397 heartbeating to localhost/127.0.0.1:39485] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1039722355-172.31.14.131-1685365016397 (Datanode Uuid 8da5704b-f8c9-48bf-a7ec-3b14571f4aae) service to localhost/127.0.0.1:39485 2023-05-29 12:57:21,362 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data9/current/BP-1039722355-172.31.14.131-1685365016397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:21,363 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data10/current/BP-1039722355-172.31.14.131-1685365016397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:21,367 WARN [sync.3] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK]] 2023-05-29 12:57:21,367 WARN [sync.3] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK]] 2023-05-29 12:57:21,367 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C43675%2C1685365018351:(num 1685365029214) roll requested 2023-05-29 12:57:21,371 WARN [Thread-639] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741841_1022 2023-05-29 12:57:21,375 WARN [Thread-639] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43653,DS-524a067d-2f90-41b7-ac97-b3e0c11fc75a,DISK] 2023-05-29 12:57:21,376 WARN [Thread-639] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741842_1023 2023-05-29 12:57:21,377 WARN [Thread-639] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK] 2023-05-29 12:57:21,379 WARN [Thread-639] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741843_1024 2023-05-29 12:57:21,380 WARN [Thread-639] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK] 2023-05-29 12:57:21,387 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365029214 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365041367 2023-05-29 12:57:21,387 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44799,DS-c39cfb7e-f018-454c-a491-a01703c367f0,DISK], DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK]] 2023-05-29 12:57:21,387 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365029214 is not closed yet, will try archiving it next time 2023-05-29 12:57:25,372 WARN [Listener at localhost/45381] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:57:25,374 WARN [ResponseProcessor for block BP-1039722355-172.31.14.131-1685365016397:blk_1073741844_1025] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1039722355-172.31.14.131-1685365016397:blk_1073741844_1025 java.io.IOException: Bad response ERROR for BP-1039722355-172.31.14.131-1685365016397:blk_1073741844_1025 from datanode DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-29 12:57:25,374 WARN [DataStreamer for file /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365041367 block 
BP-1039722355-172.31.14.131-1685365016397:blk_1073741844_1025] hdfs.DataStreamer(1548): Error Recovery for BP-1039722355-172.31.14.131-1685365016397:blk_1073741844_1025 in pipeline [DatanodeInfoWithStorage[127.0.0.1:44799,DS-c39cfb7e-f018-454c-a491-a01703c367f0,DISK], DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK]) is bad. 2023-05-29 12:57:25,374 WARN [PacketResponder: BP-1039722355-172.31.14.131-1685365016397:blk_1073741844_1025, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45517]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:25,376 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:39750 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741844_1025]] datanode.DataXceiver(323): 127.0.0.1:44799:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39750 dst: /127.0.0.1:44799 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:25,378 INFO [Listener at localhost/45381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:57:25,482 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:36406 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741844_1025]] datanode.DataXceiver(323): 127.0.0.1:45517:DataXceiver error 
processing WRITE_BLOCK operation src: /127.0.0.1:36406 dst: /127.0.0.1:45517 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:25,483 WARN [BP-1039722355-172.31.14.131-1685365016397 heartbeating to localhost/127.0.0.1:39485] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 12:57:25,484 WARN [BP-1039722355-172.31.14.131-1685365016397 heartbeating to localhost/127.0.0.1:39485] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1039722355-172.31.14.131-1685365016397 (Datanode Uuid b30a3a9f-2d2c-426f-a710-b1d25e4d32b4) service to localhost/127.0.0.1:39485 2023-05-29 12:57:25,484 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data5/current/BP-1039722355-172.31.14.131-1685365016397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:25,485 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data6/current/BP-1039722355-172.31.14.131-1685365016397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:25,489 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44799,DS-c39cfb7e-f018-454c-a491-a01703c367f0,DISK]] 2023-05-29 12:57:25,489 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44799,DS-c39cfb7e-f018-454c-a491-a01703c367f0,DISK]] 2023-05-29 12:57:25,489 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C43675%2C1685365018351:(num 1685365041367) roll requested 2023-05-29 12:57:25,492 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741845_1027 2023-05-29 12:57:25,493 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43653,DS-524a067d-2f90-41b7-ac97-b3e0c11fc75a,DISK] 2023-05-29 12:57:25,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43675] regionserver.HRegion(9158): Flush requested on 8cf0384708e8e810674c0bd362349a77 2023-05-29 12:57:25,494 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 8cf0384708e8e810674c0bd362349a77 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 12:57:25,495 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741846_1028 2023-05-29 12:57:25,496 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK] 2023-05-29 12:57:25,497 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741847_1029 2023-05-29 12:57:25,498 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK] 2023-05-29 12:57:25,501 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:39774 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741848_1030]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data8/current]'}, localName='127.0.0.1:44799', datanodeUuid='c8645d32-4136-47c0-a507-195bcd8330c7', xmitsInProgress=0}:Exception transfering block BP-1039722355-172.31.14.131-1685365016397:blk_1073741848_1030 to mirror 127.0.0.1:33483: java.net.ConnectException: Connection refused 2023-05-29 12:57:25,501 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741848_1030 2023-05-29 12:57:25,501 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:39774 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741848_1030]] datanode.DataXceiver(323): 127.0.0.1:44799:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39774 dst: /127.0.0.1:44799 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:25,501 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741849_1031 2023-05-29 12:57:25,502 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK] 2023-05-29 12:57:25,502 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK] 2023-05-29 12:57:25,502 WARN [IPC Server handler 1 on default port 39485] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-29 12:57:25,503 WARN [IPC Server handler 1 on default port 39485] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-29 12:57:25,503 WARN [IPC Server handler 1 on default port 39485] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-29 12:57:25,503 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741850_1032 2023-05-29 12:57:25,504 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43653,DS-524a067d-2f90-41b7-ac97-b3e0c11fc75a,DISK] 2023-05-29 12:57:25,505 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741852_1034 2023-05-29 12:57:25,506 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK] 2023-05-29 12:57:25,511 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:39790 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741853_1035]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data7/current, 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data8/current]'}, localName='127.0.0.1:44799', datanodeUuid='c8645d32-4136-47c0-a507-195bcd8330c7', xmitsInProgress=0}:Exception transfering block BP-1039722355-172.31.14.131-1685365016397:blk_1073741853_1035 to mirror 127.0.0.1:45517: java.net.ConnectException: Connection refused 2023-05-29 12:57:25,512 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741853_1035 2023-05-29 12:57:25,512 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:39790 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741853_1035]] datanode.DataXceiver(323): 127.0.0.1:44799:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39790 dst: /127.0.0.1:44799 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:25,512 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK] 2023-05-29 12:57:25,513 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365041367 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365045489 2023-05-29 12:57:25,513 WARN [IPC Server handler 3 on default port 39485] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-29 12:57:25,513 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44799,DS-c39cfb7e-f018-454c-a491-a01703c367f0,DISK]] 2023-05-29 12:57:25,513 WARN [IPC Server handler 3 on default port 39485] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-29 12:57:25,513 DEBUG 
[regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365041367 is not closed yet, will try archiving it next time 2023-05-29 12:57:25,513 WARN [IPC Server handler 3 on default port 39485] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-29 12:57:25,711 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44799,DS-c39cfb7e-f018-454c-a491-a01703c367f0,DISK]] 2023-05-29 12:57:25,711 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44799,DS-c39cfb7e-f018-454c-a491-a01703c367f0,DISK]] 2023-05-29 12:57:25,711 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C43675%2C1685365018351:(num 1685365045489) roll requested 2023-05-29 12:57:25,716 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:39822 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741855_1037]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data8/current]'}, localName='127.0.0.1:44799', datanodeUuid='c8645d32-4136-47c0-a507-195bcd8330c7', xmitsInProgress=0}:Exception transfering block BP-1039722355-172.31.14.131-1685365016397:blk_1073741855_1037 to mirror 127.0.0.1:34757: java.net.ConnectException: Connection refused 2023-05-29 12:57:25,716 WARN [Thread-660] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741855_1037 2023-05-29 12:57:25,716 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:39822 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741855_1037]] datanode.DataXceiver(323): 127.0.0.1:44799:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39822 dst: /127.0.0.1:44799 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:25,716 WARN [Thread-660] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK] 2023-05-29 12:57:25,719 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:39836 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741856_1038]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data8/current]'}, localName='127.0.0.1:44799', datanodeUuid='c8645d32-4136-47c0-a507-195bcd8330c7', xmitsInProgress=0}:Exception transfering block BP-1039722355-172.31.14.131-1685365016397:blk_1073741856_1038 to mirror 127.0.0.1:33483: java.net.ConnectException: Connection refused 2023-05-29 12:57:25,719 WARN [Thread-660] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741856_1038 2023-05-29 12:57:25,719 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:39836 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741856_1038]] datanode.DataXceiver(323): 127.0.0.1:44799:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39836 dst: /127.0.0.1:44799 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:25,719 WARN [Thread-660] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33483,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK] 2023-05-29 12:57:25,720 WARN [Thread-660] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741857_1039 2023-05-29 12:57:25,721 WARN [Thread-660] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK] 2023-05-29 12:57:25,723 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:39840 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741858_1040]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data7/current, 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data8/current]'}, localName='127.0.0.1:44799', datanodeUuid='c8645d32-4136-47c0-a507-195bcd8330c7', xmitsInProgress=0}:Exception transfering block BP-1039722355-172.31.14.131-1685365016397:blk_1073741858_1040 to mirror 127.0.0.1:43653: java.net.ConnectException: Connection refused 2023-05-29 12:57:25,723 WARN [Thread-660] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741858_1040 2023-05-29 12:57:25,723 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1850083675_17 at /127.0.0.1:39840 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741858_1040]] datanode.DataXceiver(323): 127.0.0.1:44799:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39840 dst: /127.0.0.1:44799 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:25,724 WARN [Thread-660] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43653,DS-524a067d-2f90-41b7-ac97-b3e0c11fc75a,DISK] 2023-05-29 12:57:25,724 WARN [IPC Server handler 3 on default port 39485] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-29 12:57:25,724 WARN [IPC Server handler 3 on default port 39485] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-29 12:57:25,724 WARN [IPC Server handler 3 on default port 39485] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-29 12:57:25,729 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365045489 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365045712 2023-05-29 12:57:25,729 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44799,DS-c39cfb7e-f018-454c-a491-a01703c367f0,DISK]] 2023-05-29 12:57:25,729 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365041367 is not closed yet, will try archiving it next time 2023-05-29 12:57:25,729 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365045489 is not closed yet, will try archiving it next time 2023-05-29 12:57:25,914 WARN [sync.1] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 2023-05-29 12:57:25,916 DEBUG [Close-WAL-Writer-0] wal.AbstractFSWAL(716): hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365045489 is not closed yet, will try archiving it next time 2023-05-29 12:57:25,918 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/.tmp/info/da3d82daf82b494a8dda18e327b5cf27 2023-05-29 12:57:25,928 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/.tmp/info/da3d82daf82b494a8dda18e327b5cf27 as hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/info/da3d82daf82b494a8dda18e327b5cf27 2023-05-29 12:57:25,934 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/info/da3d82daf82b494a8dda18e327b5cf27, entries=5, sequenceid=12, filesize=10.0 K 2023-05-29 12:57:25,935 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for 8cf0384708e8e810674c0bd362349a77 in 440ms, sequenceid=12, compaction requested=false 2023-05-29 12:57:25,935 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 8cf0384708e8e810674c0bd362349a77: 2023-05-29 12:57:26,120 WARN [Listener at localhost/45381] conf.Configuration(1701): No unit for 
dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:57:26,122 WARN [Listener at localhost/45381] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:57:26,123 INFO [Listener at localhost/45381] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:57:26,129 INFO [Listener at localhost/45381] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/java.io.tmpdir/Jetty_localhost_44735_datanode____xikfy8/webapp 2023-05-29 12:57:26,132 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365029214 to hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/oldWALs/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365029214 2023-05-29 12:57:26,222 INFO [Listener at localhost/45381] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44735 2023-05-29 12:57:26,230 WARN [Listener at localhost/37187] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:57:26,327 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3f9c7623ef293214: Processing first storage report for DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf from datanode 6191f094-b773-4427-9636-a59e1a64a84c 2023-05-29 12:57:26,328 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3f9c7623ef293214: from storage DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf node DatanodeRegistration(127.0.0.1:41957, datanodeUuid=6191f094-b773-4427-9636-a59e1a64a84c, infoPort=44303, infoSecurePort=0, ipcPort=37187, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 12:57:26,328 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3f9c7623ef293214: Processing first storage report for DS-70f2fe99-e1ca-4072-9aaa-58c52aa2379b from datanode 6191f094-b773-4427-9636-a59e1a64a84c 2023-05-29 12:57:26,328 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3f9c7623ef293214: from storage DS-70f2fe99-e1ca-4072-9aaa-58c52aa2379b node DatanodeRegistration(127.0.0.1:41957, datanodeUuid=6191f094-b773-4427-9636-a59e1a64a84c, infoPort=44303, infoSecurePort=0, ipcPort=37187, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:57:26,822 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@746e3082] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:44799, datanodeUuid=c8645d32-4136-47c0-a507-195bcd8330c7, infoPort=42857, infoSecurePort=0, ipcPort=37803, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397):Failed to transfer BP-1039722355-172.31.14.131-1685365016397:blk_1073741844_1026 to 127.0.0.1:43653 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:26,822 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@60a83857] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:44799, datanodeUuid=c8645d32-4136-47c0-a507-195bcd8330c7, infoPort=42857, infoSecurePort=0, ipcPort=37803, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397):Failed to transfer BP-1039722355-172.31.14.131-1685365016397:blk_1073741854_1036 to 127.0.0.1:43653 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:27,310 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:27,311 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C35997%2C1685365017047:(num 1685365017197) roll requested 2023-05-29 12:57:27,315 WARN [Thread-703] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741860_1042 2023-05-29 12:57:27,316 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:27,316 WARN [Thread-703] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43653,DS-524a067d-2f90-41b7-ac97-b3e0c11fc75a,DISK] 2023-05-29 12:57:27,316 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:27,318 WARN [Thread-703] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741861_1043 2023-05-29 12:57:27,319 WARN [Thread-703] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK] 2023-05-29 12:57:27,324 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-29 12:57:27,324 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/WALs/jenkins-hbase4.apache.org,35997,1685365017047/jenkins-hbase4.apache.org%2C35997%2C1685365017047.1685365017197 with entries=88, filesize=43.72 KB; new WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/WALs/jenkins-hbase4.apache.org,35997,1685365017047/jenkins-hbase4.apache.org%2C35997%2C1685365017047.1685365047311 2023-05-29 12:57:27,324 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44799,DS-c39cfb7e-f018-454c-a491-a01703c367f0,DISK], DatanodeInfoWithStorage[127.0.0.1:41957,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK]] 2023-05-29 12:57:27,324 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/WALs/jenkins-hbase4.apache.org,35997,1685365017047/jenkins-hbase4.apache.org%2C35997%2C1685365017047.1685365017197 is not closed yet, will try archiving it next time 2023-05-29 12:57:27,324 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:27,325 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/WALs/jenkins-hbase4.apache.org,35997,1685365017047/jenkins-hbase4.apache.org%2C35997%2C1685365017047.1685365017197; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:27,821 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@30a41cdc] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:44799, datanodeUuid=c8645d32-4136-47c0-a507-195bcd8330c7, infoPort=42857, infoSecurePort=0, ipcPort=37803, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397):Failed to transfer BP-1039722355-172.31.14.131-1685365016397:blk_1073741851_1033 to 127.0.0.1:45517 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:39,327 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@49e4a821] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:41957, datanodeUuid=6191f094-b773-4427-9636-a59e1a64a84c, infoPort=44303, infoSecurePort=0, ipcPort=37187, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397):Failed to transfer BP-1039722355-172.31.14.131-1685365016397:blk_1073741835_1011 to 127.0.0.1:45517 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:40,328 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@22793d4e] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:41957, datanodeUuid=6191f094-b773-4427-9636-a59e1a64a84c, infoPort=44303, infoSecurePort=0, ipcPort=37187, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397):Failed to transfer BP-1039722355-172.31.14.131-1685365016397:blk_1073741831_1007 to 127.0.0.1:45517 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:42,328 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@11e1128b] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:41957, datanodeUuid=6191f094-b773-4427-9636-a59e1a64a84c, infoPort=44303, infoSecurePort=0, ipcPort=37187, 
storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397):Failed to transfer BP-1039722355-172.31.14.131-1685365016397:blk_1073741826_1002 to 127.0.0.1:45517 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:44,770 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1129005207_17 at /127.0.0.1:45398 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741863_1045]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data4/current]'}, localName='127.0.0.1:41957', datanodeUuid='6191f094-b773-4427-9636-a59e1a64a84c', xmitsInProgress=0}:Exception transfering block BP-1039722355-172.31.14.131-1685365016397:blk_1073741863_1045 to mirror 127.0.0.1:45517: java.net.ConnectException: Connection refused 2023-05-29 12:57:44,770 WARN [Thread-720] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741863_1045 2023-05-29 12:57:44,770 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1129005207_17 at /127.0.0.1:45398 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741863_1045]] datanode.DataXceiver(323): 127.0.0.1:41957:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45398 dst: /127.0.0.1:41957 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:44,771 WARN [Thread-720] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK] 2023-05-29 12:57:44,778 INFO [Listener at localhost/37187] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365045712 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365064765 2023-05-29 12:57:44,778 DEBUG [Listener at 
localhost/37187] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44799,DS-c39cfb7e-f018-454c-a491-a01703c367f0,DISK], DatanodeInfoWithStorage[127.0.0.1:41957,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK]] 2023-05-29 12:57:44,778 DEBUG [Listener at localhost/37187] wal.AbstractFSWAL(716): hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351/jenkins-hbase4.apache.org%2C43675%2C1685365018351.1685365045712 is not closed yet, will try archiving it next time 2023-05-29 12:57:44,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43675] regionserver.HRegion(9158): Flush requested on 8cf0384708e8e810674c0bd362349a77 2023-05-29 12:57:44,783 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 8cf0384708e8e810674c0bd362349a77 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-05-29 12:57:44,784 INFO [sync.0] wal.FSHLog(774): LowReplication-Roller was enabled. 2023-05-29 12:57:44,797 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-29 12:57:44,798 INFO [Listener at localhost/37187] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-29 12:57:44,798 DEBUG [Listener at localhost/37187] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x629c0801 to 127.0.0.1:51115 2023-05-29 12:57:44,798 DEBUG [Listener at localhost/37187] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:57:44,798 DEBUG [Listener at localhost/37187] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-29 12:57:44,798 DEBUG [Listener at localhost/37187] util.JVMClusterUtil(257): Found active master hash=1709647378, stopped=false 2023-05-29 12:57:44,798 INFO [Listener at localhost/37187] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,35997,1685365017047 2023-05-29 12:57:44,800 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 12:57:44,800 INFO [Listener at localhost/37187] procedure2.ProcedureExecutor(629): Stopping 2023-05-29 12:57:44,800 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x1007703ccc10005, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 12:57:44,800 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 12:57:44,800 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:57:44,801 DEBUG [Listener at localhost/37187] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x73fb4103 to 127.0.0.1:51115 2023-05-29 12:57:44,801 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:57:44,801 DEBUG [Listener at localhost/37187] ipc.AbstractRpcClient(494): 
Stopping rpc client 2023-05-29 12:57:44,801 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:57:44,801 INFO [Listener at localhost/37187] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,46077,1685365017095' ***** 2023-05-29 12:57:44,801 INFO [Listener at localhost/37187] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 12:57:44,801 INFO [Listener at localhost/37187] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,43675,1685365018351' ***** 2023-05-29 12:57:44,801 INFO [Listener at localhost/37187] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 12:57:44,808 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43675-0x1007703ccc10005, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:57:44,810 INFO [RS:1;jenkins-hbase4:43675] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 12:57:44,811 INFO [RS:0;jenkins-hbase4:46077] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 12:57:44,811 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 12:57:44,811 INFO [RS:0;jenkins-hbase4:46077] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-29 12:57:44,811 INFO [RS:0;jenkins-hbase4:46077] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-29 12:57:44,811 INFO [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(3303): Received CLOSE for 6ca12e0044f32dc0713c67283400584c 2023-05-29 12:57:44,818 INFO [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:57:44,819 DEBUG [RS:0;jenkins-hbase4:46077] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x732f681b to 127.0.0.1:51115 2023-05-29 12:57:44,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6ca12e0044f32dc0713c67283400584c, disabling compactions & flushes 2023-05-29 12:57:44,819 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/.tmp/info/9dd41677d6624b14a0bb826e3cfa26de 2023-05-29 12:57:44,819 DEBUG [RS:0;jenkins-hbase4:46077] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:57:44,819 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 2023-05-29 12:57:44,819 INFO [RS:0;jenkins-hbase4:46077] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 12:57:44,820 INFO [RS:0;jenkins-hbase4:46077] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 12:57:44,820 INFO [RS:0;jenkins-hbase4:46077] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-05-29 12:57:44,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 2023-05-29 12:57:44,820 INFO [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 12:57:44,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. after waiting 0 ms 2023-05-29 12:57:44,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 2023-05-29 12:57:44,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 6ca12e0044f32dc0713c67283400584c 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-29 12:57:44,820 INFO [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-29 12:57:44,820 DEBUG [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 6ca12e0044f32dc0713c67283400584c=hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c.} 2023-05-29 12:57:44,820 WARN [RS:0;jenkins-hbase4:46077.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:44,821 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 12:57:44,821 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46077%2C1685365017095:(num 1685365017517) roll requested 2023-05-29 12:57:44,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6ca12e0044f32dc0713c67283400584c: 2023-05-29 12:57:44,821 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,46077,1685365017095: Unrecoverable exception while closing hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:44,820 DEBUG [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1504): Waiting on 1588230740, 6ca12e0044f32dc0713c67283400584c 2023-05-29 12:57:44,822 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-29 12:57:44,821 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 12:57:44,822 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 12:57:44,822 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 12:57:44,822 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 12:57:44,822 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 12:57:44,823 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-29 12:57:44,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-29 12:57:44,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-29 12:57:44,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-29 12:57:44,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-29 12:57:44,827 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 992477184, "init": 513802240, "max": 2051014656, "used": 348552760 }, "NonHeapMemoryUsage": { "committed": 133718016, "init": 2555904, "max": -1, "used": 131028520 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-29 12:57:44,833 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_679236728_17 at /127.0.0.1:54970 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741866_1048]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data7/current, 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data8/current]'}, localName='127.0.0.1:44799', datanodeUuid='c8645d32-4136-47c0-a507-195bcd8330c7', xmitsInProgress=0}:Exception transfering block BP-1039722355-172.31.14.131-1685365016397:blk_1073741866_1048 to mirror 127.0.0.1:45517: java.net.ConnectException: Connection refused 2023-05-29 12:57:44,833 WARN [Thread-735] hdfs.DataStreamer(1658): Abandoning BP-1039722355-172.31.14.131-1685365016397:blk_1073741866_1048 2023-05-29 12:57:44,833 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_679236728_17 at /127.0.0.1:54970 [Receiving block BP-1039722355-172.31.14.131-1685365016397:blk_1073741866_1048]] datanode.DataXceiver(323): 127.0.0.1:44799:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54970 dst: /127.0.0.1:44799 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:44,834 WARN [Thread-735] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45517,DS-b67be958-5caa-407a-a7d0-7b80ffcec15b,DISK] 2023-05-29 12:57:44,835 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35997] master.MasterRpcServices(609): jenkins-hbase4.apache.org,46077,1685365017095 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,46077,1685365017095: Unrecoverable exception while closing hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:44,838 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/.tmp/info/9dd41677d6624b14a0bb826e3cfa26de as hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/info/9dd41677d6624b14a0bb826e3cfa26de 2023-05-29 12:57:44,843 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-05-29 12:57:44,843 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,46077,1685365017095/jenkins-hbase4.apache.org%2C46077%2C1685365017095.1685365017517 with entries=3, filesize=600 B; new WAL /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,46077,1685365017095/jenkins-hbase4.apache.org%2C46077%2C1685365017095.1685365064821 2023-05-29 12:57:44,844 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41957,DS-f4f87969-0b91-4c61-ba2a-179e64ae1acf,DISK], DatanodeInfoWithStorage[127.0.0.1:44799,DS-c39cfb7e-f018-454c-a491-a01703c367f0,DISK]] 2023-05-29 12:57:44,844 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,46077,1685365017095/jenkins-hbase4.apache.org%2C46077%2C1685365017095.1685365017517 is not closed yet, will try archiving it next time 2023-05-29 12:57:44,844 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:44,844 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,46077,1685365017095/jenkins-hbase4.apache.org%2C46077%2C1685365017095.1685365017517; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:44,848 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/info/9dd41677d6624b14a0bb826e3cfa26de, entries=8, sequenceid=25, filesize=13.2 K 2023-05-29 12:57:44,849 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for 8cf0384708e8e810674c0bd362349a77 in 66ms, sequenceid=25, compaction requested=false 2023-05-29 12:57:44,849 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 8cf0384708e8e810674c0bd362349a77: 2023-05-29 12:57:44,849 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-05-29 12:57:44,849 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 12:57:44,849 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/info/9dd41677d6624b14a0bb826e3cfa26de because midkey is the same as first or last row 2023-05-29 12:57:44,849 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 12:57:44,849 INFO [RS:1;jenkins-hbase4:43675] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-29 12:57:44,849 INFO [RS:1;jenkins-hbase4:43675] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-29 12:57:44,849 INFO [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(3303): Received CLOSE for 8cf0384708e8e810674c0bd362349a77 2023-05-29 12:57:44,850 INFO [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:57:44,850 DEBUG [RS:1;jenkins-hbase4:43675] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6494bdbe to 127.0.0.1:51115 2023-05-29 12:57:44,850 DEBUG [RS:1;jenkins-hbase4:43675] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:57:44,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8cf0384708e8e810674c0bd362349a77, disabling compactions & flushes 2023-05-29 12:57:44,850 INFO [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-05-29 12:57:44,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. 
2023-05-29 12:57:44,850 DEBUG [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(1478): Online Regions={8cf0384708e8e810674c0bd362349a77=TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77.} 2023-05-29 12:57:44,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. 2023-05-29 12:57:44,850 DEBUG [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(1504): Waiting on 8cf0384708e8e810674c0bd362349a77 2023-05-29 12:57:44,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. after waiting 0 ms 2023-05-29 12:57:44,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. 2023-05-29 12:57:44,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8cf0384708e8e810674c0bd362349a77 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-29 12:57:44,860 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/.tmp/info/0cd28cc5f3c94747ab1ed67ebce4bb7a 2023-05-29 12:57:44,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/.tmp/info/0cd28cc5f3c94747ab1ed67ebce4bb7a as hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/info/0cd28cc5f3c94747ab1ed67ebce4bb7a 2023-05-29 12:57:44,872 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/info/0cd28cc5f3c94747ab1ed67ebce4bb7a, entries=9, sequenceid=37, filesize=14.2 K 2023-05-29 12:57:44,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=0 B/0 for 8cf0384708e8e810674c0bd362349a77 in 23ms, sequenceid=37, compaction requested=true 2023-05-29 12:57:44,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8cf0384708e8e810674c0bd362349a77/recovered.edits/40.seqid, newMaxSeqId=40, maxSeqId=1 2023-05-29 12:57:44,880 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. 
2023-05-29 12:57:44,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8cf0384708e8e810674c0bd362349a77: 2023-05-29 12:57:44,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685365018451.8cf0384708e8e810674c0bd362349a77. 2023-05-29 12:57:45,022 INFO [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 12:57:45,023 INFO [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(3303): Received CLOSE for 6ca12e0044f32dc0713c67283400584c 2023-05-29 12:57:45,023 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 12:57:45,023 DEBUG [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1504): Waiting on 1588230740, 6ca12e0044f32dc0713c67283400584c 2023-05-29 12:57:45,023 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 12:57:45,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6ca12e0044f32dc0713c67283400584c, disabling compactions & flushes 2023-05-29 12:57:45,023 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 12:57:45,023 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 2023-05-29 12:57:45,023 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 12:57:45,023 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 12:57:45,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 2023-05-29 12:57:45,023 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 12:57:45,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. after waiting 0 ms 2023-05-29 12:57:45,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 2023-05-29 12:57:45,023 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-29 12:57:45,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6ca12e0044f32dc0713c67283400584c: 2023-05-29 12:57:45,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685365017805.6ca12e0044f32dc0713c67283400584c. 
2023-05-29 12:57:45,050 INFO [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43675,1685365018351; all regions closed. 2023-05-29 12:57:45,051 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:57:45,060 DEBUG [RS:1;jenkins-hbase4:43675] wal.AbstractFSWAL(1028): Moved 4 WAL file(s) to /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/oldWALs 2023-05-29 12:57:45,061 INFO [RS:1;jenkins-hbase4:43675] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C43675%2C1685365018351:(num 1685365064765) 2023-05-29 12:57:45,061 DEBUG [RS:1;jenkins-hbase4:43675] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:57:45,061 INFO [RS:1;jenkins-hbase4:43675] regionserver.LeaseManager(133): Closed leases 2023-05-29 12:57:45,061 INFO [RS:1;jenkins-hbase4:43675] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-29 12:57:45,061 INFO [RS:1;jenkins-hbase4:43675] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 12:57:45,061 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 12:57:45,061 INFO [RS:1;jenkins-hbase4:43675] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 12:57:45,061 INFO [RS:1;jenkins-hbase4:43675] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-29 12:57:45,062 INFO [RS:1;jenkins-hbase4:43675] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43675 2023-05-29 12:57:45,064 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:57:45,064 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:57:45,064 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:57:45,064 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x1007703ccc10005, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43675,1685365018351 2023-05-29 12:57:45,065 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x1007703ccc10005, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:57:45,066 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43675,1685365018351] 2023-05-29 12:57:45,066 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing 
jenkins-hbase4.apache.org,43675,1685365018351; numProcessing=1 2023-05-29 12:57:45,071 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43675,1685365018351 already deleted, retry=false 2023-05-29 12:57:45,071 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43675,1685365018351 expired; onlineServers=1 2023-05-29 12:57:45,223 INFO [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-29 12:57:45,223 INFO [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46077,1685365017095; all regions closed. 2023-05-29 12:57:45,223 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:57:45,223 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:45,224 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/WALs/jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:57:45,230 ERROR [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1539): Shutdown / close of WAL failed: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... 2023-05-29 12:57:45,230 DEBUG [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1540): Shutdown / close exception details: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34757,DS-1b911015-e18e-4414-baac-718b24225b9f,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:57:45,231 DEBUG [RS:0;jenkins-hbase4:46077] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:57:45,231 INFO [RS:0;jenkins-hbase4:46077] regionserver.LeaseManager(133): Closed leases 2023-05-29 12:57:45,231 INFO [RS:0;jenkins-hbase4:46077] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-29 12:57:45,231 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-29 12:57:45,231 INFO [RS:0;jenkins-hbase4:46077] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46077 2023-05-29 12:57:45,233 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46077,1685365017095 2023-05-29 12:57:45,233 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:57:45,235 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46077,1685365017095] 2023-05-29 12:57:45,235 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46077,1685365017095; numProcessing=2 2023-05-29 12:57:45,236 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46077,1685365017095 already deleted, retry=false 2023-05-29 12:57:45,236 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46077,1685365017095 expired; onlineServers=0 2023-05-29 12:57:45,236 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,35997,1685365017047' ***** 2023-05-29 12:57:45,236 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-29 12:57:45,237 DEBUG [M:0;jenkins-hbase4:35997] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61d8be76, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 12:57:45,237 INFO [M:0;jenkins-hbase4:35997] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35997,1685365017047 2023-05-29 12:57:45,237 INFO [M:0;jenkins-hbase4:35997] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35997,1685365017047; all regions closed. 2023-05-29 12:57:45,237 DEBUG [M:0;jenkins-hbase4:35997] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:57:45,237 DEBUG [M:0;jenkins-hbase4:35997] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-29 12:57:45,237 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-29 12:57:45,237 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365017327] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365017327,5,FailOnTimeoutGroup] 2023-05-29 12:57:45,237 DEBUG [M:0;jenkins-hbase4:35997] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-29 12:57:45,237 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365017327] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365017327,5,FailOnTimeoutGroup] 2023-05-29 12:57:45,238 INFO [M:0;jenkins-hbase4:35997] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-05-29 12:57:45,238 INFO [M:0;jenkins-hbase4:35997] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-29 12:57:45,238 INFO [M:0;jenkins-hbase4:35997] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-29 12:57:45,238 DEBUG [M:0;jenkins-hbase4:35997] master.HMaster(1512): Stopping service threads 2023-05-29 12:57:45,238 INFO [M:0;jenkins-hbase4:35997] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-29 12:57:45,239 ERROR [M:0;jenkins-hbase4:35997] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-29 12:57:45,239 INFO [M:0;jenkins-hbase4:35997] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-29 12:57:45,239 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-29 12:57:45,240 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-29 12:57:45,240 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:57:45,240 DEBUG [M:0;jenkins-hbase4:35997] zookeeper.ZKUtil(398): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-29 12:57:45,240 WARN [M:0;jenkins-hbase4:35997] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-29 12:57:45,240 INFO [M:0;jenkins-hbase4:35997] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-29 12:57:45,240 INFO [M:0;jenkins-hbase4:35997] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-29 12:57:45,240 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:57:45,241 DEBUG [M:0;jenkins-hbase4:35997] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 12:57:45,241 INFO [M:0;jenkins-hbase4:35997] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:57:45,241 DEBUG [M:0;jenkins-hbase4:35997] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:57:45,241 DEBUG [M:0;jenkins-hbase4:35997] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 12:57:45,241 DEBUG [M:0;jenkins-hbase4:35997] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-29 12:57:45,241 INFO [M:0;jenkins-hbase4:35997] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.09 KB heapSize=45.73 KB 2023-05-29 12:57:45,252 INFO [M:0;jenkins-hbase4:35997] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.09 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2f9b603d2517418bb033cf7e52b52366 2023-05-29 12:57:45,258 DEBUG [M:0;jenkins-hbase4:35997] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2f9b603d2517418bb033cf7e52b52366 as hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2f9b603d2517418bb033cf7e52b52366 2023-05-29 12:57:45,263 INFO [M:0;jenkins-hbase4:35997] regionserver.HStore(1080): Added hdfs://localhost:39485/user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2f9b603d2517418bb033cf7e52b52366, entries=11, sequenceid=92, filesize=7.0 K 2023-05-29 12:57:45,264 INFO [M:0;jenkins-hbase4:35997] regionserver.HRegion(2948): Finished flush of dataSize ~38.09 KB/39009, heapSize ~45.72 KB/46816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=92, compaction requested=false 2023-05-29 12:57:45,265 INFO [M:0;jenkins-hbase4:35997] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:57:45,265 DEBUG [M:0;jenkins-hbase4:35997] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:57:45,265 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/8519a6ec-f5cd-cec1-05af-e7e52fe84124/MasterData/WALs/jenkins-hbase4.apache.org,35997,1685365017047 2023-05-29 12:57:45,268 INFO [M:0;jenkins-hbase4:35997] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-29 12:57:45,268 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 12:57:45,268 INFO [M:0;jenkins-hbase4:35997] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35997 2023-05-29 12:57:45,270 DEBUG [M:0;jenkins-hbase4:35997] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,35997,1685365017047 already deleted, retry=false 2023-05-29 12:57:45,300 INFO [RS:1;jenkins-hbase4:43675] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43675,1685365018351; zookeeper connection closed. 
2023-05-29 12:57:45,300 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x1007703ccc10005, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-29 12:57:45,301 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:43675-0x1007703ccc10005, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-29 12:57:45,301 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@650a8a99] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@650a8a99
2023-05-29 12:57:45,328 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@7a19ceb4] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:41957, datanodeUuid=6191f094-b773-4427-9636-a59e1a64a84c, infoPort=44303, infoSecurePort=0, ipcPort=37187, storageInfo=lv=-57;cid=testClusterID;nsid=721762925;c=1685365016397):Failed to transfer BP-1039722355-172.31.14.131-1685365016397:blk_1073741825_1001 to 127.0.0.1:45517 got
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431)
	at java.lang.Thread.run(Thread.java:750)
2023-05-29 12:57:45,390 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-05-29 12:57:45,401 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-29 12:57:45,401 INFO [M:0;jenkins-hbase4:35997] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35997,1685365017047; zookeeper connection closed.
2023-05-29 12:57:45,401 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): master:35997-0x1007703ccc10000, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-29 12:57:45,501 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-29 12:57:45,501 INFO [RS:0;jenkins-hbase4:46077] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46077,1685365017095; zookeeper connection closed.
2023-05-29 12:57:45,501 DEBUG [Listener at localhost/37799-EventThread] zookeeper.ZKWatcher(600): regionserver:46077-0x1007703ccc10001, quorum=127.0.0.1:51115, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 12:57:45,501 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1bb32bcc] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1bb32bcc 2023-05-29 12:57:45,502 INFO [Listener at localhost/37187] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-05-29 12:57:45,502 WARN [Listener at localhost/37187] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:57:45,506 INFO [Listener at localhost/37187] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:57:45,609 WARN [BP-1039722355-172.31.14.131-1685365016397 heartbeating to localhost/127.0.0.1:39485] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 12:57:45,609 WARN [BP-1039722355-172.31.14.131-1685365016397 heartbeating to localhost/127.0.0.1:39485] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1039722355-172.31.14.131-1685365016397 (Datanode Uuid 6191f094-b773-4427-9636-a59e1a64a84c) service to localhost/127.0.0.1:39485 2023-05-29 12:57:45,610 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data3/current/BP-1039722355-172.31.14.131-1685365016397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:45,610 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data4/current/BP-1039722355-172.31.14.131-1685365016397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:45,612 WARN [Listener at localhost/37187] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:57:45,615 INFO [Listener at localhost/37187] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:57:45,717 WARN [BP-1039722355-172.31.14.131-1685365016397 heartbeating to localhost/127.0.0.1:39485] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 12:57:45,717 WARN [BP-1039722355-172.31.14.131-1685365016397 heartbeating to localhost/127.0.0.1:39485] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1039722355-172.31.14.131-1685365016397 (Datanode Uuid c8645d32-4136-47c0-a507-195bcd8330c7) service to localhost/127.0.0.1:39485 2023-05-29 12:57:45,718 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data7/current/BP-1039722355-172.31.14.131-1685365016397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:45,718 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/cluster_7459c5cf-be26-ba28-6d75-aabb5745fe07/dfs/data/data8/current/BP-1039722355-172.31.14.131-1685365016397] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:45,729 INFO [Listener at localhost/37187] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:57:45,845 INFO [Listener at localhost/37187] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-29 12:57:45,879 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-29 12:57:45,891 INFO [Listener at localhost/37187] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=78 (was 52) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (96891934) connection to localhost/127.0.0.1:39485 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (96891934) connection to localhost/127.0.0.1:39485 from jenkins.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-6-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ForkJoinPool-2-worker-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: IPC Parameter Sending Thread #3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37187 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) 
org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost:39485 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost:39485 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (96891934) connection to localhost/127.0.0.1:39485 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (96891934) connection to localhost/127.0.0.1:39485 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:39485 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=461 (was 442) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=45 (was 87), ProcessCount=167 (was 168), AvailableMemoryMB=3429 (was 3973) 2023-05-29 12:57:45,901 INFO [Listener at localhost/37187] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=78, OpenFileDescriptor=461, MaxFileDescriptor=60000, SystemLoadAverage=45, ProcessCount=167, AvailableMemoryMB=3429 2023-05-29 12:57:45,901 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-29 12:57:45,901 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/hadoop.log.dir so I do NOT create it in target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63 2023-05-29 12:57:45,901 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c1d0063d-aa26-6b0b-af5e-86de4671df3e/hadoop.tmp.dir so I do NOT create it in target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63 2023-05-29 12:57:45,901 INFO [Listener at localhost/37187] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9, deleteOnExit=true 2023-05-29 12:57:45,902 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-29 12:57:45,902 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/test.cache.data in system properties and HBase conf 2023-05-29 12:57:45,902 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/hadoop.tmp.dir in system properties and HBase conf 2023-05-29 12:57:45,902 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/hadoop.log.dir in system properties and HBase conf 2023-05-29 12:57:45,902 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-29 12:57:45,902 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-29 12:57:45,902 
INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-29 12:57:45,902 DEBUG [Listener at localhost/37187] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-29 12:57:45,903 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-29 12:57:45,903 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-29 12:57:45,903 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-29 12:57:45,903 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 12:57:45,903 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-29 12:57:45,903 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-29 12:57:45,903 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 12:57:45,903 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 12:57:45,903 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-29 12:57:45,903 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/nfs.dump.dir in system properties and HBase conf 2023-05-29 12:57:45,904 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/java.io.tmpdir in system properties and HBase conf 2023-05-29 12:57:45,904 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 12:57:45,904 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-29 12:57:45,904 INFO [Listener at localhost/37187] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-29 12:57:45,905 WARN [Listener at localhost/37187] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-29 12:57:45,908 WARN [Listener at localhost/37187] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 12:57:45,908 WARN [Listener at localhost/37187] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 12:57:45,951 WARN [Listener at localhost/37187] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:57:45,953 INFO [Listener at localhost/37187] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:57:45,958 INFO [Listener at localhost/37187] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/java.io.tmpdir/Jetty_localhost_37933_hdfs____.iezyld/webapp 2023-05-29 12:57:46,048 INFO [Listener at localhost/37187] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37933 2023-05-29 12:57:46,049 WARN [Listener at localhost/37187] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-29 12:57:46,052 WARN [Listener at localhost/37187] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 12:57:46,052 WARN [Listener at localhost/37187] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 12:57:46,094 WARN [Listener at localhost/33567] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:57:46,108 WARN [Listener at localhost/33567] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:57:46,110 WARN [Listener at localhost/33567] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:57:46,111 INFO [Listener at localhost/33567] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:57:46,116 INFO [Listener at localhost/33567] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/java.io.tmpdir/Jetty_localhost_42831_datanode____npo2m7/webapp 2023-05-29 12:57:46,205 INFO [Listener at localhost/33567] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42831 2023-05-29 12:57:46,211 WARN [Listener at localhost/37367] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:57:46,227 WARN [Listener at localhost/37367] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:57:46,229 WARN [Listener at localhost/37367] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:57:46,230 INFO [Listener at localhost/37367] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:57:46,233 INFO [Listener at localhost/37367] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/java.io.tmpdir/Jetty_localhost_35667_datanode____e02vcs/webapp 2023-05-29 12:57:46,304 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1eda1fdc055ddee4: Processing first storage report for DS-a353c9f2-cd73-48f1-95b0-380744620ea2 from datanode 993d9d5c-8067-4fe9-affc-f469d0d75b43 2023-05-29 12:57:46,304 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1eda1fdc055ddee4: from storage DS-a353c9f2-cd73-48f1-95b0-380744620ea2 node DatanodeRegistration(127.0.0.1:45779, datanodeUuid=993d9d5c-8067-4fe9-affc-f469d0d75b43, infoPort=42059, infoSecurePort=0, ipcPort=37367, storageInfo=lv=-57;cid=testClusterID;nsid=1076446928;c=1685365065911), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:57:46,304 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1eda1fdc055ddee4: Processing first storage report for DS-04ad4f1d-6c1b-481d-ae4b-48fbe5f6efa8 from datanode 993d9d5c-8067-4fe9-affc-f469d0d75b43 2023-05-29 12:57:46,304 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0x1eda1fdc055ddee4: from storage DS-04ad4f1d-6c1b-481d-ae4b-48fbe5f6efa8 node DatanodeRegistration(127.0.0.1:45779, datanodeUuid=993d9d5c-8067-4fe9-affc-f469d0d75b43, infoPort=42059, infoSecurePort=0, ipcPort=37367, storageInfo=lv=-57;cid=testClusterID;nsid=1076446928;c=1685365065911), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:57:46,331 INFO [Listener at localhost/37367] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35667 2023-05-29 12:57:46,337 WARN [Listener at localhost/43589] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:57:46,422 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-29 12:57:46,425 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x43f2589f9d4eab8b: Processing first storage report for DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78 from datanode aa38821d-2331-44dd-9614-a83f2835d9d4 2023-05-29 12:57:46,425 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x43f2589f9d4eab8b: from storage DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78 node DatanodeRegistration(127.0.0.1:45149, datanodeUuid=aa38821d-2331-44dd-9614-a83f2835d9d4, infoPort=33965, infoSecurePort=0, ipcPort=43589, storageInfo=lv=-57;cid=testClusterID;nsid=1076446928;c=1685365065911), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:57:46,425 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x43f2589f9d4eab8b: Processing first storage report for DS-94f15a0e-d9ae-4ca4-865c-6dc5d501df47 from datanode aa38821d-2331-44dd-9614-a83f2835d9d4 2023-05-29 12:57:46,425 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x43f2589f9d4eab8b: from storage DS-94f15a0e-d9ae-4ca4-865c-6dc5d501df47 node DatanodeRegistration(127.0.0.1:45149, datanodeUuid=aa38821d-2331-44dd-9614-a83f2835d9d4, infoPort=33965, infoSecurePort=0, ipcPort=43589, storageInfo=lv=-57;cid=testClusterID;nsid=1076446928;c=1685365065911), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:57:46,444 DEBUG [Listener at localhost/43589] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63 2023-05-29 12:57:46,447 INFO [Listener at localhost/43589] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/zookeeper_0, clientPort=59149, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-29 12:57:46,448 INFO [Listener at localhost/43589] zookeeper.MiniZooKeeperCluster(283): Started 
MiniZooKeeperCluster and ran 'stat' on client port=59149 2023-05-29 12:57:46,449 INFO [Listener at localhost/43589] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:57:46,450 INFO [Listener at localhost/43589] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:57:46,463 INFO [Listener at localhost/43589] util.FSUtils(471): Created version file at hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b with version=8 2023-05-29 12:57:46,463 INFO [Listener at localhost/43589] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/hbase-staging 2023-05-29 12:57:46,465 INFO [Listener at localhost/43589] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 12:57:46,466 INFO [Listener at localhost/43589] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:57:46,466 INFO [Listener at localhost/43589] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 12:57:46,466 INFO [Listener at localhost/43589] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 12:57:46,466 INFO [Listener at localhost/43589] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:57:46,466 INFO [Listener at localhost/43589] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 12:57:46,466 INFO [Listener at localhost/43589] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 12:57:46,467 INFO [Listener at localhost/43589] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36655 2023-05-29 12:57:46,468 INFO [Listener at localhost/43589] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:57:46,469 INFO [Listener at localhost/43589] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:57:46,470 INFO [Listener at localhost/43589] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36655 connecting to ZooKeeper ensemble=127.0.0.1:59149 2023-05-29 12:57:46,478 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:366550x0, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 12:57:46,478 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKWatcher(625): master:36655-0x10077048dd10000 connected 2023-05-29 12:57:46,492 DEBUG [Listener at localhost/43589] zookeeper.ZKUtil(164): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:57:46,493 DEBUG [Listener at localhost/43589] zookeeper.ZKUtil(164): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:57:46,493 DEBUG [Listener at localhost/43589] zookeeper.ZKUtil(164): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 12:57:46,493 DEBUG [Listener at localhost/43589] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36655 2023-05-29 12:57:46,493 DEBUG [Listener at localhost/43589] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36655 2023-05-29 12:57:46,494 DEBUG [Listener at localhost/43589] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36655 2023-05-29 12:57:46,494 DEBUG [Listener at localhost/43589] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36655 2023-05-29 12:57:46,494 DEBUG [Listener at localhost/43589] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36655 2023-05-29 12:57:46,494 INFO [Listener at localhost/43589] master.HMaster(444): hbase.rootdir=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b, hbase.cluster.distributed=false 2023-05-29 12:57:46,507 INFO [Listener at localhost/43589] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 12:57:46,507 INFO [Listener at localhost/43589] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:57:46,507 INFO [Listener at localhost/43589] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 12:57:46,507 INFO [Listener at localhost/43589] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 12:57:46,507 INFO [Listener at localhost/43589] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:57:46,507 INFO [Listener at localhost/43589] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 12:57:46,507 INFO [Listener at localhost/43589] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 12:57:46,509 INFO [Listener at localhost/43589] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38253 2023-05-29 12:57:46,509 INFO [Listener at localhost/43589] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 12:57:46,510 DEBUG [Listener at 
localhost/43589] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 12:57:46,510 INFO [Listener at localhost/43589] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:57:46,511 INFO [Listener at localhost/43589] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:57:46,512 INFO [Listener at localhost/43589] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38253 connecting to ZooKeeper ensemble=127.0.0.1:59149 2023-05-29 12:57:46,516 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): regionserver:382530x0, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 12:57:46,517 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38253-0x10077048dd10001 connected 2023-05-29 12:57:46,517 DEBUG [Listener at localhost/43589] zookeeper.ZKUtil(164): regionserver:38253-0x10077048dd10001, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:57:46,517 DEBUG [Listener at localhost/43589] zookeeper.ZKUtil(164): regionserver:38253-0x10077048dd10001, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:57:46,517 DEBUG [Listener at localhost/43589] zookeeper.ZKUtil(164): regionserver:38253-0x10077048dd10001, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 12:57:46,518 DEBUG [Listener at localhost/43589] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38253 2023-05-29 12:57:46,518 DEBUG [Listener at localhost/43589] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38253 2023-05-29 12:57:46,518 DEBUG [Listener at localhost/43589] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38253 2023-05-29 12:57:46,519 DEBUG [Listener at localhost/43589] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38253 2023-05-29 12:57:46,519 DEBUG [Listener at localhost/43589] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38253 2023-05-29 12:57:46,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36655,1685365066465 2023-05-29 12:57:46,521 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 12:57:46,521 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36655,1685365066465 2023-05-29 12:57:46,523 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): regionserver:38253-0x10077048dd10001, quorum=127.0.0.1:59149, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 12:57:46,523 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 12:57:46,523 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:57:46,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 12:57:46,524 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36655,1685365066465 from backup master directory 2023-05-29 12:57:46,524 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 12:57:46,525 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36655,1685365066465 2023-05-29 12:57:46,525 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 12:57:46,525 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 12:57:46,525 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36655,1685365066465 2023-05-29 12:57:46,539 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/hbase.id with ID: 9c8d627b-14ac-44ba-8a26-f130668428ae 2023-05-29 12:57:46,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:57:46,554 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:57:46,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0d9fb230 to 127.0.0.1:59149 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:57:46,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a885af1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:57:46,565 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 12:57:46,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-29 12:57:46,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:57:46,567 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/data/master/store-tmp 2023-05-29 12:57:46,579 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:57:46,579 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 12:57:46,579 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:57:46,579 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:57:46,579 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 12:57:46,579 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:57:46,579 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:57:46,579 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:57:46,580 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/WALs/jenkins-hbase4.apache.org,36655,1685365066465 2023-05-29 12:57:46,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36655%2C1685365066465, suffix=, logDir=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/WALs/jenkins-hbase4.apache.org,36655,1685365066465, archiveDir=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/oldWALs, maxLogs=10 2023-05-29 12:57:46,593 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/WALs/jenkins-hbase4.apache.org,36655,1685365066465/jenkins-hbase4.apache.org%2C36655%2C1685365066465.1685365066584 2023-05-29 12:57:46,594 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45149,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK], DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] 2023-05-29 12:57:46,594 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:57:46,594 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:57:46,594 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:57:46,594 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:57:46,596 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:57:46,598 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-29 12:57:46,599 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-29 12:57:46,599 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:57:46,600 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:57:46,601 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:57:46,604 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:57:46,607 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:57:46,607 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=866861, jitterRate=0.10227084159851074}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:57:46,607 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:57:46,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-29 12:57:46,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-29 12:57:46,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
2023-05-29 12:57:46,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-29 12:57:46,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-29 12:57:46,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-29 12:57:46,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-29 12:57:46,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-29 12:57:46,615 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-29 12:57:46,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-29 12:57:46,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-29 12:57:46,628 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-29 12:57:46,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-29 12:57:46,628 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-29 12:57:46,630 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:57:46,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-29 12:57:46,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-29 12:57:46,632 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-29 12:57:46,633 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 12:57:46,633 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:57:46,633 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): regionserver:38253-0x10077048dd10001, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 12:57:46,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36655,1685365066465, sessionid=0x10077048dd10000, setting cluster-up flag (Was=false) 2023-05-29 12:57:46,639 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:57:46,644 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-29 12:57:46,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36655,1685365066465 2023-05-29 12:57:46,649 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 
12:57:46,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-29 12:57:46,656 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36655,1685365066465 2023-05-29 12:57:46,656 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/.hbase-snapshot/.tmp 2023-05-29 12:57:46,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-29 12:57:46,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:57:46,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:57:46,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:57:46,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:57:46,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-29 12:57:46,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:57:46,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 12:57:46,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:57:46,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685365096665 2023-05-29 12:57:46,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-29 12:57:46,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-29 12:57:46,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-29 12:57:46,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-29 12:57:46,669 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-29 12:57:46,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-29 12:57:46,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 12:57:46,675 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 12:57:46,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-29 12:57:46,676 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-29 12:57:46,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-29 12:57:46,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-29 12:57:46,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-29 12:57:46,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-29 12:57:46,677 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 12:57:46,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365066677,5,FailOnTimeoutGroup] 2023-05-29 12:57:46,690 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365066683,5,FailOnTimeoutGroup] 2023-05-29 12:57:46,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-29 12:57:46,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-29 12:57:46,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-29 12:57:46,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-29 12:57:46,704 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 12:57:46,705 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 12:57:46,705 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b 2023-05-29 12:57:46,722 INFO [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(951): ClusterId : 9c8d627b-14ac-44ba-8a26-f130668428ae 2023-05-29 12:57:46,722 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:57:46,723 DEBUG [RS:0;jenkins-hbase4:38253] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 12:57:46,724 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 12:57:46,725 DEBUG [RS:0;jenkins-hbase4:38253] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 12:57:46,725 DEBUG [RS:0;jenkins-hbase4:38253] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 12:57:46,726 DEBUG [StoreOpener-1588230740-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740/info 2023-05-29 12:57:46,727 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 12:57:46,727 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:57:46,727 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 12:57:46,728 DEBUG [RS:0;jenkins-hbase4:38253] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 12:57:46,729 DEBUG [RS:0;jenkins-hbase4:38253] zookeeper.ReadOnlyZKClient(139): Connect 0x21d39946 to 127.0.0.1:59149 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:57:46,730 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:57:46,730 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 12:57:46,731 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:57:46,731 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 12:57:46,732 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740/table 2023-05-29 12:57:46,733 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 12:57:46,733 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:57:46,735 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740 2023-05-29 12:57:46,735 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740 2023-05-29 12:57:46,736 DEBUG [RS:0;jenkins-hbase4:38253] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25ec09a4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:57:46,737 DEBUG [RS:0;jenkins-hbase4:38253] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@60ee937d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 12:57:46,738 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-29 12:57:46,739 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 12:57:46,742 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:57:46,742 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=727154, jitterRate=-0.07537668943405151}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 12:57:46,743 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 12:57:46,743 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 12:57:46,743 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 12:57:46,743 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 12:57:46,743 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 12:57:46,743 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 12:57:46,743 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 12:57:46,743 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 12:57:46,744 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 12:57:46,745 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-29 12:57:46,745 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-29 12:57:46,746 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-29 12:57:46,748 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-29 12:57:46,748 DEBUG [RS:0;jenkins-hbase4:38253] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:38253 2023-05-29 12:57:46,748 INFO [RS:0;jenkins-hbase4:38253] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 12:57:46,748 INFO [RS:0;jenkins-hbase4:38253] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 12:57:46,748 DEBUG [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-29 12:57:46,748 INFO [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,36655,1685365066465 with isa=jenkins-hbase4.apache.org/172.31.14.131:38253, startcode=1685365066507 2023-05-29 12:57:46,749 DEBUG [RS:0;jenkins-hbase4:38253] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 12:57:46,751 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39575, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 12:57:46,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36655] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:57:46,753 DEBUG [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b 2023-05-29 12:57:46,753 DEBUG [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33567 2023-05-29 12:57:46,753 DEBUG [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 12:57:46,755 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:57:46,756 DEBUG [RS:0;jenkins-hbase4:38253] zookeeper.ZKUtil(162): regionserver:38253-0x10077048dd10001, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:57:46,756 WARN [RS:0;jenkins-hbase4:38253] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 12:57:46,756 INFO [RS:0;jenkins-hbase4:38253] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:57:46,756 DEBUG [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(1946): logDir=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:57:46,756 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38253,1685365066507] 2023-05-29 12:57:46,760 DEBUG [RS:0;jenkins-hbase4:38253] zookeeper.ZKUtil(162): regionserver:38253-0x10077048dd10001, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:57:46,760 DEBUG [RS:0;jenkins-hbase4:38253] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 12:57:46,761 INFO [RS:0;jenkins-hbase4:38253] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 12:57:46,762 INFO [RS:0;jenkins-hbase4:38253] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 12:57:46,762 INFO [RS:0;jenkins-hbase4:38253] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 12:57:46,762 INFO [RS:0;jenkins-hbase4:38253] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 12:57:46,763 INFO [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 12:57:46,764 INFO [RS:0;jenkins-hbase4:38253] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-29 12:57:46,764 DEBUG [RS:0;jenkins-hbase4:38253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:57:46,764 DEBUG [RS:0;jenkins-hbase4:38253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:57:46,764 DEBUG [RS:0;jenkins-hbase4:38253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:57:46,764 DEBUG [RS:0;jenkins-hbase4:38253] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:57:46,764 DEBUG [RS:0;jenkins-hbase4:38253] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:57:46,764 DEBUG [RS:0;jenkins-hbase4:38253] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 12:57:46,764 DEBUG [RS:0;jenkins-hbase4:38253] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:57:46,764 DEBUG [RS:0;jenkins-hbase4:38253] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:57:46,765 DEBUG [RS:0;jenkins-hbase4:38253] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:57:46,765 DEBUG [RS:0;jenkins-hbase4:38253] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:57:46,766 INFO [RS:0;jenkins-hbase4:38253] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 12:57:46,766 INFO [RS:0;jenkins-hbase4:38253] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 12:57:46,766 INFO [RS:0;jenkins-hbase4:38253] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 12:57:46,777 INFO [RS:0;jenkins-hbase4:38253] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 12:57:46,777 INFO [RS:0;jenkins-hbase4:38253] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38253,1685365066507-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 12:57:46,788 INFO [RS:0;jenkins-hbase4:38253] regionserver.Replication(203): jenkins-hbase4.apache.org,38253,1685365066507 started 2023-05-29 12:57:46,788 INFO [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38253,1685365066507, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38253, sessionid=0x10077048dd10001 2023-05-29 12:57:46,788 DEBUG [RS:0;jenkins-hbase4:38253] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 12:57:46,788 DEBUG [RS:0;jenkins-hbase4:38253] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:57:46,788 DEBUG [RS:0;jenkins-hbase4:38253] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38253,1685365066507' 2023-05-29 12:57:46,788 DEBUG [RS:0;jenkins-hbase4:38253] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:57:46,788 DEBUG [RS:0;jenkins-hbase4:38253] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:57:46,788 DEBUG [RS:0;jenkins-hbase4:38253] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 12:57:46,788 DEBUG [RS:0;jenkins-hbase4:38253] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 12:57:46,788 DEBUG [RS:0;jenkins-hbase4:38253] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:57:46,789 DEBUG [RS:0;jenkins-hbase4:38253] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38253,1685365066507' 2023-05-29 12:57:46,789 DEBUG [RS:0;jenkins-hbase4:38253] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 12:57:46,789 DEBUG [RS:0;jenkins-hbase4:38253] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 12:57:46,789 DEBUG [RS:0;jenkins-hbase4:38253] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 12:57:46,789 INFO [RS:0;jenkins-hbase4:38253] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 12:57:46,789 INFO [RS:0;jenkins-hbase4:38253] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-29 12:57:46,891 INFO [RS:0;jenkins-hbase4:38253] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38253%2C1685365066507, suffix=, logDir=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507, archiveDir=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/oldWALs, maxLogs=32 2023-05-29 12:57:46,898 DEBUG [jenkins-hbase4:36655] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-29 12:57:46,898 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38253,1685365066507, state=OPENING 2023-05-29 12:57:46,900 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-29 12:57:46,901 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:57:46,901 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38253,1685365066507}] 2023-05-29 12:57:46,901 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 12:57:46,902 INFO [RS:0;jenkins-hbase4:38253] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365066892 2023-05-29 12:57:46,903 DEBUG [RS:0;jenkins-hbase4:38253] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45149,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK], DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] 2023-05-29 12:57:47,056 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:57:47,056 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-29 12:57:47,058 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37932, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-29 12:57:47,062 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-29 12:57:47,062 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:57:47,064 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38253%2C1685365066507.meta, suffix=.meta, logDir=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507, archiveDir=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/oldWALs, maxLogs=32 2023-05-29 12:57:47,078 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.meta.1685365067066.meta 2023-05-29 12:57:47,078 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45149,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK], DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] 2023-05-29 12:57:47,078 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:57:47,078 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-29 12:57:47,078 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-29 12:57:47,078 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-29 12:57:47,079 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-29 12:57:47,079 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:57:47,079 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-29 12:57:47,079 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-29 12:57:47,081 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 12:57:47,082 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740/info 2023-05-29 12:57:47,082 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740/info 2023-05-29 12:57:47,082 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 12:57:47,083 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:57:47,083 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 12:57:47,084 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:57:47,084 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:57:47,085 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 12:57:47,085 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:57:47,086 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 12:57:47,089 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740/table 2023-05-29 12:57:47,089 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740/table 2023-05-29 12:57:47,089 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 12:57:47,090 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:57:47,091 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740 2023-05-29 12:57:47,092 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/meta/1588230740 2023-05-29 12:57:47,095 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 12:57:47,097 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 12:57:47,098 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=806908, jitterRate=0.026036620140075684}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 12:57:47,098 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 12:57:47,100 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685365067056 2023-05-29 12:57:47,105 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38253,1685365066507, state=OPEN 2023-05-29 12:57:47,107 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-29 12:57:47,107 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 12:57:47,109 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-29 12:57:47,109 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-29 12:57:47,111 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-29 12:57:47,111 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38253,1685365066507 in 206 msec 2023-05-29 12:57:47,114 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-29 12:57:47,114 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 366 msec 2023-05-29 12:57:47,117 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 457 msec 2023-05-29 12:57:47,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685365067117, completionTime=-1 2023-05-29 12:57:47,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-29 12:57:47,117 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-29 12:57:47,123 DEBUG [hconnection-0x5dee2e00-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 12:57:47,126 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37944, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 12:57:47,127 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-29 12:57:47,127 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685365127127 2023-05-29 12:57:47,127 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685365187127 2023-05-29 12:57:47,127 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 10 msec 2023-05-29 12:57:47,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36655,1685365066465-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 12:57:47,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36655,1685365066465-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:57:47,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36655,1685365066465-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:57:47,137 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36655, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:57:47,137 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-29 12:57:47,137 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-29 12:57:47,137 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 12:57:47,138 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-29 12:57:47,139 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-29 12:57:47,141 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 12:57:47,142 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 12:57:47,144 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/.tmp/data/hbase/namespace/c3dc8bad1449634a23d44482a46b7bd0 2023-05-29 12:57:47,145 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/.tmp/data/hbase/namespace/c3dc8bad1449634a23d44482a46b7bd0 empty. 2023-05-29 12:57:47,145 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/.tmp/data/hbase/namespace/c3dc8bad1449634a23d44482a46b7bd0 2023-05-29 12:57:47,145 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-29 12:57:47,161 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-29 12:57:47,162 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c3dc8bad1449634a23d44482a46b7bd0, NAME => 'hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/.tmp 2023-05-29 12:57:47,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:57:47,171 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c3dc8bad1449634a23d44482a46b7bd0, disabling compactions & flushes 2023-05-29 12:57:47,171 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. 
2023-05-29 12:57:47,171 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. 2023-05-29 12:57:47,171 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. after waiting 0 ms 2023-05-29 12:57:47,171 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. 2023-05-29 12:57:47,171 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. 2023-05-29 12:57:47,171 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c3dc8bad1449634a23d44482a46b7bd0: 2023-05-29 12:57:47,174 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 12:57:47,175 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365067174"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685365067174"}]},"ts":"1685365067174"} 2023-05-29 12:57:47,177 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 12:57:47,178 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 12:57:47,179 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365067178"}]},"ts":"1685365067178"} 2023-05-29 12:57:47,180 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-29 12:57:47,188 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c3dc8bad1449634a23d44482a46b7bd0, ASSIGN}] 2023-05-29 12:57:47,190 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c3dc8bad1449634a23d44482a46b7bd0, ASSIGN 2023-05-29 12:57:47,191 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c3dc8bad1449634a23d44482a46b7bd0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38253,1685365066507; forceNewPlan=false, retain=false 2023-05-29 12:57:47,342 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c3dc8bad1449634a23d44482a46b7bd0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:57:47,342 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365067342"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685365067342"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685365067342"}]},"ts":"1685365067342"} 2023-05-29 12:57:47,345 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure c3dc8bad1449634a23d44482a46b7bd0, server=jenkins-hbase4.apache.org,38253,1685365066507}] 2023-05-29 12:57:47,501 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. 2023-05-29 12:57:47,501 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c3dc8bad1449634a23d44482a46b7bd0, NAME => 'hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:57:47,502 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c3dc8bad1449634a23d44482a46b7bd0 2023-05-29 12:57:47,502 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:57:47,502 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c3dc8bad1449634a23d44482a46b7bd0 2023-05-29 12:57:47,502 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c3dc8bad1449634a23d44482a46b7bd0 2023-05-29 12:57:47,503 INFO [StoreOpener-c3dc8bad1449634a23d44482a46b7bd0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c3dc8bad1449634a23d44482a46b7bd0 2023-05-29 12:57:47,504 DEBUG [StoreOpener-c3dc8bad1449634a23d44482a46b7bd0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/namespace/c3dc8bad1449634a23d44482a46b7bd0/info 2023-05-29 12:57:47,504 DEBUG [StoreOpener-c3dc8bad1449634a23d44482a46b7bd0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/namespace/c3dc8bad1449634a23d44482a46b7bd0/info 2023-05-29 12:57:47,505 INFO [StoreOpener-c3dc8bad1449634a23d44482a46b7bd0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c3dc8bad1449634a23d44482a46b7bd0 columnFamilyName info 2023-05-29 12:57:47,507 INFO [StoreOpener-c3dc8bad1449634a23d44482a46b7bd0-1] regionserver.HStore(310): Store=c3dc8bad1449634a23d44482a46b7bd0/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:57:47,508 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/namespace/c3dc8bad1449634a23d44482a46b7bd0 2023-05-29 12:57:47,509 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/namespace/c3dc8bad1449634a23d44482a46b7bd0 2023-05-29 12:57:47,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c3dc8bad1449634a23d44482a46b7bd0 2023-05-29 12:57:47,513 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/hbase/namespace/c3dc8bad1449634a23d44482a46b7bd0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:57:47,514 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c3dc8bad1449634a23d44482a46b7bd0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=852262, jitterRate=0.08370757102966309}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:57:47,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c3dc8bad1449634a23d44482a46b7bd0: 2023-05-29 12:57:47,515 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0., pid=6, masterSystemTime=1685365067498 2023-05-29 12:57:47,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. 2023-05-29 12:57:47,517 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. 
2023-05-29 12:57:47,518 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c3dc8bad1449634a23d44482a46b7bd0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:57:47,518 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365067518"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685365067518"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685365067518"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685365067518"}]},"ts":"1685365067518"} 2023-05-29 12:57:47,523 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-29 12:57:47,523 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure c3dc8bad1449634a23d44482a46b7bd0, server=jenkins-hbase4.apache.org,38253,1685365066507 in 175 msec 2023-05-29 12:57:47,525 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-29 12:57:47,525 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c3dc8bad1449634a23d44482a46b7bd0, ASSIGN in 335 msec 2023-05-29 12:57:47,526 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 12:57:47,527 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365067526"}]},"ts":"1685365067526"} 2023-05-29 12:57:47,528 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-29 12:57:47,530 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 12:57:47,532 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 393 msec 2023-05-29 12:57:47,540 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-29 12:57:47,541 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:57:47,541 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:57:47,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-29 12:57:47,554 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): 
master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:57:47,559 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-05-29 12:57:47,568 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-29 12:57:47,575 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:57:47,579 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-05-29 12:57:47,593 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-29 12:57:47,595 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-29 12:57:47,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.070sec 2023-05-29 12:57:47,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-29 12:57:47,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-29 12:57:47,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-29 12:57:47,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36655,1685365066465-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-29 12:57:47,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36655,1685365066465-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-29 12:57:47,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-29 12:57:47,621 DEBUG [Listener at localhost/43589] zookeeper.ReadOnlyZKClient(139): Connect 0x4b0747b3 to 127.0.0.1:59149 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:57:47,628 DEBUG [Listener at localhost/43589] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4bb849aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:57:47,629 DEBUG [hconnection-0x7cb40a7a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 12:57:47,631 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37958, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 12:57:47,632 INFO [Listener at localhost/43589] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36655,1685365066465 2023-05-29 12:57:47,633 INFO [Listener at localhost/43589] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:57:47,636 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-29 12:57:47,636 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:57:47,636 INFO [Listener at localhost/43589] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-29 12:57:47,636 INFO [Listener at localhost/43589] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-05-29 12:57:47,636 INFO [Listener at localhost/43589] wal.TestLogRolling(432): Replication=2 2023-05-29 12:57:47,638 DEBUG [Listener at localhost/43589] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-29 12:57:47,640 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39014, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-29 12:57:47,642 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36655] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-29 12:57:47,642 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36655] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
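The two TableDescriptorChecker WARNs just above are expected in this kind of test: log-rolling tests typically shrink "hbase.hregion.max.filesize" and "hbase.hregion.memstore.flush.size" so that flushes, splits and WAL rolls happen quickly on tiny amounts of data. A minimal sketch of how such a configuration might be set up before the mini-cluster is started, assuming the standard HBaseTestingUtility/StartMiniClusterOption API; this is illustrative only, not the test's actual code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

// Sketch only: shrink the region limits that TableDescriptorChecker warns about
// so that flushes, splits and WAL rolls happen quickly during the test.
public class TinyRegionMiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    Configuration conf = util.getConfiguration();
    // Property names and the values 786432 / 8192 are taken from the WARN lines above.
    conf.setLong("hbase.hregion.max.filesize", 786432L);      // ~768 KB before a region splits
    conf.setLong("hbase.hregion.memstore.flush.size", 8192L); // 8 KB memstore before a flush
    // Two datanodes, matching the two-replica write pipeline seen later in the log (assumed).
    util.startMiniCluster(StartMiniClusterOption.builder().numDataNodes(2).build());
    try {
      // ... run test logic against util.getConnection() ...
    } finally {
      util.shutdownMiniCluster();
    }
  }
}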
2023-05-29 12:57:47,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36655] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 12:57:47,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36655] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-05-29 12:57:47,646 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 12:57:47,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36655] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-05-29 12:57:47,647 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 12:57:47,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36655] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 12:57:47,648 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/fb3e726092d5bb945ae2f1b5a14539de 2023-05-29 12:57:47,649 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/fb3e726092d5bb945ae2f1b5a14539de empty. 
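The create-table request logged at the start of the entry above spells out the full descriptor for 'TestLogRolling-testLogRollOnPipelineRestart': a single 'info' family with one version, a ROW bloom filter and 64 KB blocks. A hedged sketch of how an equivalent descriptor could be built and submitted with the standard HBase 2.x client API follows; the real test may construct it differently.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch only: builds a descriptor equivalent to the one logged by HMaster above.
public class CreateTestTableSketch {
  public static void main(String[] args) throws Exception {
    TableName name = TableName.valueOf("TestLogRolling-testLogRollOnPipelineRestart");
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      admin.createTable(TableDescriptorBuilder.newBuilder(name)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
              .setMaxVersions(1)                   // VERSIONS => '1'
              .setBlocksize(65536)                 // BLOCKSIZE => '65536'
              .build())
          .build());
    }
  }
}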
2023-05-29 12:57:47,649 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/fb3e726092d5bb945ae2f1b5a14539de 2023-05-29 12:57:47,649 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-05-29 12:57:47,659 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-05-29 12:57:47,660 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => fb3e726092d5bb945ae2f1b5a14539de, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/.tmp 2023-05-29 12:57:47,667 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:57:47,667 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing fb3e726092d5bb945ae2f1b5a14539de, disabling compactions & flushes 2023-05-29 12:57:47,667 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 2023-05-29 12:57:47,667 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 2023-05-29 12:57:47,667 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. after waiting 0 ms 2023-05-29 12:57:47,667 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 2023-05-29 12:57:47,667 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 
2023-05-29 12:57:47,667 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for fb3e726092d5bb945ae2f1b5a14539de: 2023-05-29 12:57:47,670 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 12:57:47,671 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685365067670"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685365067670"}]},"ts":"1685365067670"} 2023-05-29 12:57:47,672 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 12:57:47,673 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 12:57:47,673 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365067673"}]},"ts":"1685365067673"} 2023-05-29 12:57:47,675 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-05-29 12:57:47,679 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=fb3e726092d5bb945ae2f1b5a14539de, ASSIGN}] 2023-05-29 12:57:47,681 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=fb3e726092d5bb945ae2f1b5a14539de, ASSIGN 2023-05-29 12:57:47,682 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=fb3e726092d5bb945ae2f1b5a14539de, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38253,1685365066507; forceNewPlan=false, retain=false 2023-05-29 12:57:47,833 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=fb3e726092d5bb945ae2f1b5a14539de, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:57:47,833 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685365067833"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685365067833"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685365067833"}]},"ts":"1685365067833"} 2023-05-29 12:57:47,835 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure fb3e726092d5bb945ae2f1b5a14539de, server=jenkins-hbase4.apache.org,38253,1685365066507}] 
2023-05-29 12:57:47,992 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 2023-05-29 12:57:47,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fb3e726092d5bb945ae2f1b5a14539de, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:57:47,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart fb3e726092d5bb945ae2f1b5a14539de 2023-05-29 12:57:47,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:57:47,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fb3e726092d5bb945ae2f1b5a14539de 2023-05-29 12:57:47,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fb3e726092d5bb945ae2f1b5a14539de 2023-05-29 12:57:47,994 INFO [StoreOpener-fb3e726092d5bb945ae2f1b5a14539de-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fb3e726092d5bb945ae2f1b5a14539de 2023-05-29 12:57:47,995 DEBUG [StoreOpener-fb3e726092d5bb945ae2f1b5a14539de-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/default/TestLogRolling-testLogRollOnPipelineRestart/fb3e726092d5bb945ae2f1b5a14539de/info 2023-05-29 12:57:47,995 DEBUG [StoreOpener-fb3e726092d5bb945ae2f1b5a14539de-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/default/TestLogRolling-testLogRollOnPipelineRestart/fb3e726092d5bb945ae2f1b5a14539de/info 2023-05-29 12:57:47,996 INFO [StoreOpener-fb3e726092d5bb945ae2f1b5a14539de-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fb3e726092d5bb945ae2f1b5a14539de columnFamilyName info 2023-05-29 12:57:47,996 INFO [StoreOpener-fb3e726092d5bb945ae2f1b5a14539de-1] regionserver.HStore(310): Store=fb3e726092d5bb945ae2f1b5a14539de/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:57:47,997 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/default/TestLogRolling-testLogRollOnPipelineRestart/fb3e726092d5bb945ae2f1b5a14539de 2023-05-29 12:57:47,997 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/default/TestLogRolling-testLogRollOnPipelineRestart/fb3e726092d5bb945ae2f1b5a14539de 2023-05-29 12:57:48,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fb3e726092d5bb945ae2f1b5a14539de 2023-05-29 12:57:48,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/data/default/TestLogRolling-testLogRollOnPipelineRestart/fb3e726092d5bb945ae2f1b5a14539de/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:57:48,004 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fb3e726092d5bb945ae2f1b5a14539de; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=777723, jitterRate=-0.011075153946876526}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:57:48,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fb3e726092d5bb945ae2f1b5a14539de: 2023-05-29 12:57:48,005 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de., pid=11, masterSystemTime=1685365067988 2023-05-29 12:57:48,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 2023-05-29 12:57:48,007 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 
2023-05-29 12:57:48,008 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=fb3e726092d5bb945ae2f1b5a14539de, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:57:48,008 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685365068008"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685365068008"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685365068008"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685365068008"}]},"ts":"1685365068008"} 2023-05-29 12:57:48,013 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-29 12:57:48,013 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure fb3e726092d5bb945ae2f1b5a14539de, server=jenkins-hbase4.apache.org,38253,1685365066507 in 175 msec 2023-05-29 12:57:48,015 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-29 12:57:48,015 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=fb3e726092d5bb945ae2f1b5a14539de, ASSIGN in 334 msec 2023-05-29 12:57:48,016 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 12:57:48,016 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365068016"}]},"ts":"1685365068016"} 2023-05-29 12:57:48,018 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-05-29 12:57:48,020 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 12:57:48,022 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 379 msec 2023-05-29 12:57:50,420 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-29 12:57:52,761 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-05-29 12:57:57,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36655] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 12:57:57,648 INFO [Listener at localhost/43589] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 2023-05-29 12:57:57,651 DEBUG [Listener at localhost/43589] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart 
2023-05-29 12:57:57,651 DEBUG [Listener at localhost/43589] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 2023-05-29 12:57:59,656 INFO [Listener at localhost/43589] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365066892 2023-05-29 12:57:59,657 WARN [Listener at localhost/43589] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:57:59,658 WARN [ResponseProcessor for block BP-503405845-172.31.14.131-1685365065911:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-503405845-172.31.14.131-1685365065911:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:57:59,659 WARN [ResponseProcessor for block BP-503405845-172.31.14.131-1685365065911:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-503405845-172.31.14.131-1685365065911:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:57:59,660 WARN [DataStreamer for file /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/WALs/jenkins-hbase4.apache.org,36655,1685365066465/jenkins-hbase4.apache.org%2C36655%2C1685365066465.1685365066584 block BP-503405845-172.31.14.131-1685365065911:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-503405845-172.31.14.131-1685365065911:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45149,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK], DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:45149,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK]) is bad. 
2023-05-29 12:57:59,659 WARN [ResponseProcessor for block BP-503405845-172.31.14.131-1685365065911:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-503405845-172.31.14.131-1685365065911:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:57:59,660 WARN [DataStreamer for file /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.meta.1685365067066.meta block BP-503405845-172.31.14.131-1685365065911:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-503405845-172.31.14.131-1685365065911:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45149,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK], DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:45149,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK]) is bad. 2023-05-29 12:57:59,660 WARN [DataStreamer for file /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365066892 block BP-503405845-172.31.14.131-1685365065911:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-503405845-172.31.14.131-1685365065911:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45149,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK], DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:45149,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK]) is bad. 2023-05-29 12:57:59,664 INFO [Listener at localhost/43589] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:57:59,670 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-163159498_17 at /127.0.0.1:39174 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:45779:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39174 dst: /127.0.0.1:45779 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45779 remote=/127.0.0.1:39174]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:59,671 WARN [PacketResponder: BP-503405845-172.31.14.131-1685365065911:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45779]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:59,671 WARN [PacketResponder: BP-503405845-172.31.14.131-1685365065911:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45779]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:59,671 WARN [PacketResponder: BP-503405845-172.31.14.131-1685365065911:blk_1073741833_1009, 
type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45779]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:59,670 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-163159498_17 at /127.0.0.1:39172 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:45779:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39172 dst: /127.0.0.1:45779 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45779 remote=/127.0.0.1:39172]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:59,670 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-514130651_17 at /127.0.0.1:39146 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:45779:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39146 dst: /127.0.0.1:45779 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45779 remote=/127.0.0.1:39146]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:59,674 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-163159498_17 at /127.0.0.1:57020 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:45149:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57020 dst: /127.0.0.1:45149 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:59,673 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-163159498_17 at /127.0.0.1:57016 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:45149:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57016 dst: /127.0.0.1:45149 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:59,672 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-514130651_17 at /127.0.0.1:56984 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:45149:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56984 dst: /127.0.0.1:45149 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:57:59,771 WARN [BP-503405845-172.31.14.131-1685365065911 heartbeating to localhost/127.0.0.1:33567] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 12:57:59,771 WARN [BP-503405845-172.31.14.131-1685365065911 heartbeating to localhost/127.0.0.1:33567] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-503405845-172.31.14.131-1685365065911 (Datanode Uuid aa38821d-2331-44dd-9614-a83f2835d9d4) service to localhost/127.0.0.1:33567 2023-05-29 12:57:59,771 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data3/current/BP-503405845-172.31.14.131-1685365065911] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:59,772 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data4/current/BP-503405845-172.31.14.131-1685365065911] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:57:59,778 WARN [Listener at localhost/43589] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:57:59,781 WARN [Listener at localhost/43589] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:57:59,782 INFO [Listener at localhost/43589] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:57:59,792 INFO [Listener at localhost/43589] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/java.io.tmpdir/Jetty_localhost_33277_datanode____.6646hz/webapp 2023-05-29 12:57:59,882 INFO [Listener at localhost/43589] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33277 2023-05-29 12:57:59,888 WARN [Listener at localhost/38507] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:57:59,894 WARN [Listener at localhost/38507] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:57:59,894 WARN [ResponseProcessor for block BP-503405845-172.31.14.131-1685365065911:blk_1073741833_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-503405845-172.31.14.131-1685365065911:blk_1073741833_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:57:59,894 WARN [ResponseProcessor for block BP-503405845-172.31.14.131-1685365065911:blk_1073741832_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-503405845-172.31.14.131-1685365065911:blk_1073741832_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:57:59,894 WARN [ResponseProcessor for block BP-503405845-172.31.14.131-1685365065911:blk_1073741829_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-503405845-172.31.14.131-1685365065911:blk_1073741829_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:57:59,900 INFO [Listener at localhost/38507] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:57:59,960 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5fad60f5f5f721e3: Processing first storage report for DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78 from datanode aa38821d-2331-44dd-9614-a83f2835d9d4 2023-05-29 12:57:59,960 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5fad60f5f5f721e3: from storage DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78 node DatanodeRegistration(127.0.0.1:40875, datanodeUuid=aa38821d-2331-44dd-9614-a83f2835d9d4, infoPort=34667, infoSecurePort=0, ipcPort=38507, storageInfo=lv=-57;cid=testClusterID;nsid=1076446928;c=1685365065911), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:57:59,960 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5fad60f5f5f721e3: Processing first storage report for DS-94f15a0e-d9ae-4ca4-865c-6dc5d501df47 from datanode aa38821d-2331-44dd-9614-a83f2835d9d4 2023-05-29 
12:57:59,960 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5fad60f5f5f721e3: from storage DS-94f15a0e-d9ae-4ca4-865c-6dc5d501df47 node DatanodeRegistration(127.0.0.1:40875, datanodeUuid=aa38821d-2331-44dd-9614-a83f2835d9d4, infoPort=34667, infoSecurePort=0, ipcPort=38507, storageInfo=lv=-57;cid=testClusterID;nsid=1076446928;c=1685365065911), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:58:00,002 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-163159498_17 at /127.0.0.1:42906 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:45779:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42906 dst: /127.0.0.1:45779 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:58:00,003 WARN [BP-503405845-172.31.14.131-1685365065911 heartbeating to localhost/127.0.0.1:33567] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 12:58:00,002 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-514130651_17 at /127.0.0.1:42902 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:45779:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42902 dst: /127.0.0.1:45779 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:58:00,002 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-163159498_17 at /127.0.0.1:42904 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:45779:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42904 dst: /127.0.0.1:45779 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:58:00,004 WARN [BP-503405845-172.31.14.131-1685365065911 heartbeating to localhost/127.0.0.1:33567] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-503405845-172.31.14.131-1685365065911 (Datanode Uuid 993d9d5c-8067-4fe9-affc-f469d0d75b43) service to localhost/127.0.0.1:33567 2023-05-29 12:58:00,005 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data1/current/BP-503405845-172.31.14.131-1685365065911] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:58:00,006 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data2/current/BP-503405845-172.31.14.131-1685365065911] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:58:00,011 WARN [Listener at localhost/38507] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:58:00,014 WARN [Listener at localhost/38507] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:58:00,015 INFO [Listener at localhost/38507] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:58:00,020 INFO [Listener at localhost/38507] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/java.io.tmpdir/Jetty_localhost_37559_datanode____.t5rnc4/webapp 2023-05-29 12:58:00,110 INFO [Listener at localhost/38507] 
log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37559 2023-05-29 12:58:00,119 WARN [Listener at localhost/33255] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:58:00,185 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbad7cef31389afff: Processing first storage report for DS-a353c9f2-cd73-48f1-95b0-380744620ea2 from datanode 993d9d5c-8067-4fe9-affc-f469d0d75b43 2023-05-29 12:58:00,186 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbad7cef31389afff: from storage DS-a353c9f2-cd73-48f1-95b0-380744620ea2 node DatanodeRegistration(127.0.0.1:39869, datanodeUuid=993d9d5c-8067-4fe9-affc-f469d0d75b43, infoPort=37973, infoSecurePort=0, ipcPort=33255, storageInfo=lv=-57;cid=testClusterID;nsid=1076446928;c=1685365065911), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 12:58:00,186 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbad7cef31389afff: Processing first storage report for DS-04ad4f1d-6c1b-481d-ae4b-48fbe5f6efa8 from datanode 993d9d5c-8067-4fe9-affc-f469d0d75b43 2023-05-29 12:58:00,186 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbad7cef31389afff: from storage DS-04ad4f1d-6c1b-481d-ae4b-48fbe5f6efa8 node DatanodeRegistration(127.0.0.1:39869, datanodeUuid=993d9d5c-8067-4fe9-affc-f469d0d75b43, infoPort=37973, infoSecurePort=0, ipcPort=33255, storageInfo=lv=-57;cid=testClusterID;nsid=1076446928;c=1685365065911), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:58:01,123 INFO [Listener at localhost/33255] wal.TestLogRolling(481): Data Nodes restarted 2023-05-29 12:58:01,124 INFO [Listener at localhost/33255] wal.AbstractTestLogRolling(233): Validated row row1002 2023-05-29 12:58:01,125 WARN [RS:0;jenkins-hbase4:38253.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:01,126 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C38253%2C1685365066507:(num 1685365066892) roll requested 2023-05-29 12:58:01,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38253] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:01,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38253] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:37958 deadline: 1685365091125, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-29 12:58:01,135 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365066892 newFile=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365081126 2023-05-29 12:58:01,135 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-29 12:58:01,135 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365066892 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365081126 2023-05-29 12:58:01,136 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:39869,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK], DatanodeInfoWithStorage[127.0.0.1:40875,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK]] 2023-05-29 12:58:01,136 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:01,136 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365066892 is not closed yet, will try archiving it next time 2023-05-29 12:58:01,136 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365066892; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:13,154 INFO [Listener at localhost/33255] wal.AbstractTestLogRolling(233): Validated row row1003 2023-05-29 12:58:15,156 WARN [Listener at localhost/33255] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:58:15,157 WARN [ResponseProcessor for block BP-503405845-172.31.14.131-1685365065911:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-503405845-172.31.14.131-1685365065911:blk_1073741838_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:58:15,158 WARN [DataStreamer for file /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365081126 block BP-503405845-172.31.14.131-1685365065911:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-503405845-172.31.14.131-1685365065911:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39869,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK], DatanodeInfoWithStorage[127.0.0.1:40875,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39869,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]) is bad. 
2023-05-29 12:58:15,161 INFO [Listener at localhost/33255] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:58:15,163 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-163159498_17 at /127.0.0.1:46940 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:40875:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46940 dst: /127.0.0.1:40875 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:40875 remote=/127.0.0.1:46940]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:58:15,163 WARN [PacketResponder: BP-503405845-172.31.14.131-1685365065911:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40875]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:58:15,165 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-163159498_17 at /127.0.0.1:46280 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:39869:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46280 dst: /127.0.0.1:39869 java.io.InterruptedIOException: Interrupted while waiting for IO on 
channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:58:15,186 WARN [BP-503405845-172.31.14.131-1685365065911 heartbeating to localhost/127.0.0.1:33567] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-503405845-172.31.14.131-1685365065911 (Datanode Uuid 993d9d5c-8067-4fe9-affc-f469d0d75b43) service to localhost/127.0.0.1:33567 2023-05-29 12:58:15,187 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data1/current/BP-503405845-172.31.14.131-1685365065911] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:58:15,187 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data2/current/BP-503405845-172.31.14.131-1685365065911] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:58:15,272 WARN [Listener at localhost/33255] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:58:15,274 WARN [Listener at localhost/33255] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:58:15,275 INFO [Listener at localhost/33255] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:58:15,280 INFO [Listener at localhost/33255] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/java.io.tmpdir/Jetty_localhost_33847_datanode____qfzqo2/webapp 2023-05-29 12:58:15,372 INFO [Listener at localhost/33255] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33847 2023-05-29 12:58:15,384 WARN [Listener at localhost/33879] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:58:15,390 WARN [Listener at localhost/33879] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:58:15,390 WARN [ResponseProcessor for block BP-503405845-172.31.14.131-1685365065911:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-503405845-172.31.14.131-1685365065911:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:58:15,395 INFO [Listener at localhost/33879] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:58:15,456 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe0351e54336065a6: Processing first storage report for DS-a353c9f2-cd73-48f1-95b0-380744620ea2 from datanode 993d9d5c-8067-4fe9-affc-f469d0d75b43 2023-05-29 12:58:15,457 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe0351e54336065a6: from storage DS-a353c9f2-cd73-48f1-95b0-380744620ea2 node DatanodeRegistration(127.0.0.1:33433, datanodeUuid=993d9d5c-8067-4fe9-affc-f469d0d75b43, infoPort=34979, infoSecurePort=0, ipcPort=33879, storageInfo=lv=-57;cid=testClusterID;nsid=1076446928;c=1685365065911), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:58:15,457 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe0351e54336065a6: Processing first storage report for DS-04ad4f1d-6c1b-481d-ae4b-48fbe5f6efa8 from datanode 993d9d5c-8067-4fe9-affc-f469d0d75b43 2023-05-29 12:58:15,457 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe0351e54336065a6: from storage DS-04ad4f1d-6c1b-481d-ae4b-48fbe5f6efa8 node DatanodeRegistration(127.0.0.1:33433, datanodeUuid=993d9d5c-8067-4fe9-affc-f469d0d75b43, infoPort=34979, infoSecurePort=0, ipcPort=33879, storageInfo=lv=-57;cid=testClusterID;nsid=1076446928;c=1685365065911), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:58:15,498 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-163159498_17 at /127.0.0.1:58686 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:40875:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58686 dst: /127.0.0.1:40875 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:58:15,500 WARN [BP-503405845-172.31.14.131-1685365065911 heartbeating to localhost/127.0.0.1:33567] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 12:58:15,500 WARN [BP-503405845-172.31.14.131-1685365065911 heartbeating to localhost/127.0.0.1:33567] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-503405845-172.31.14.131-1685365065911 (Datanode Uuid aa38821d-2331-44dd-9614-a83f2835d9d4) service to localhost/127.0.0.1:33567 2023-05-29 12:58:15,500 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data3/current/BP-503405845-172.31.14.131-1685365065911] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:58:15,501 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data4/current/BP-503405845-172.31.14.131-1685365065911] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:58:15,506 WARN [Listener at localhost/33879] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:58:15,508 WARN [Listener at localhost/33879] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:58:15,509 INFO [Listener at localhost/33879] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:58:15,513 INFO [Listener at localhost/33879] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/java.io.tmpdir/Jetty_localhost_45813_datanode____5eez6o/webapp 2023-05-29 12:58:15,607 INFO [Listener at localhost/33879] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45813 2023-05-29 12:58:15,614 WARN [Listener at localhost/44735] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:58:15,677 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeca7c419de16889: Processing first storage report for DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78 from datanode aa38821d-2331-44dd-9614-a83f2835d9d4 2023-05-29 12:58:15,677 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeca7c419de16889: from storage DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78 node DatanodeRegistration(127.0.0.1:45777, datanodeUuid=aa38821d-2331-44dd-9614-a83f2835d9d4, infoPort=45249, infoSecurePort=0, ipcPort=44735, storageInfo=lv=-57;cid=testClusterID;nsid=1076446928;c=1685365065911), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:58:15,677 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeca7c419de16889: Processing first storage report for DS-94f15a0e-d9ae-4ca4-865c-6dc5d501df47 from datanode aa38821d-2331-44dd-9614-a83f2835d9d4 2023-05-29 12:58:15,677 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeca7c419de16889: from storage DS-94f15a0e-d9ae-4ca4-865c-6dc5d501df47 node DatanodeRegistration(127.0.0.1:45777, datanodeUuid=aa38821d-2331-44dd-9614-a83f2835d9d4, infoPort=45249, infoSecurePort=0, ipcPort=44735, storageInfo=lv=-57;cid=testClusterID;nsid=1076446928;c=1685365065911), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 12:58:16,617 INFO [Listener at localhost/44735] wal.TestLogRolling(498): Data Nodes restarted 2023-05-29 12:58:16,619 INFO [Listener at localhost/44735] wal.AbstractTestLogRolling(233): Validated row row1004 2023-05-29 12:58:16,620 WARN [RS:0;jenkins-hbase4:38253.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40875,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:16,621 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C38253%2C1685365066507:(num 1685365081126) roll requested 2023-05-29 12:58:16,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38253] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40875,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:16,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38253] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:37958 deadline: 1685365106619, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-29 12:58:16,628 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365081126 newFile=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365096621 2023-05-29 12:58:16,628 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-29 12:58:16,628 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365081126 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365096621 2023-05-29 12:58:16,629 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:33433,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK], DatanodeInfoWithStorage[127.0.0.1:45777,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK]] 2023-05-29 12:58:16,629 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40875,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:16,629 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365081126 is not closed yet, will try archiving it next time 2023-05-29 12:58:16,629 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365081126; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40875,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:16,666 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:16,666 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36655%2C1685365066465:(num 1685365066584) roll requested 2023-05-29 12:58:16,666 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:16,667 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:16,674 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-29 12:58:16,674 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/WALs/jenkins-hbase4.apache.org,36655,1685365066465/jenkins-hbase4.apache.org%2C36655%2C1685365066465.1685365066584 with entries=88, filesize=43.79 KB; new WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/WALs/jenkins-hbase4.apache.org,36655,1685365066465/jenkins-hbase4.apache.org%2C36655%2C1685365066465.1685365096666 2023-05-29 12:58:16,674 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33433,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK], DatanodeInfoWithStorage[127.0.0.1:45777,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK]] 2023-05-29 12:58:16,674 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/WALs/jenkins-hbase4.apache.org,36655,1685365066465/jenkins-hbase4.apache.org%2C36655%2C1685365066465.1685365066584 is not closed yet, will try archiving it next time 2023-05-29 12:58:16,674 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:16,674 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/WALs/jenkins-hbase4.apache.org,36655,1685365066465/jenkins-hbase4.apache.org%2C36655%2C1685365066465.1685365066584; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:28,686 DEBUG [Listener at localhost/44735] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365096621 newFile=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676 2023-05-29 12:58:28,688 INFO [Listener at localhost/44735] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365096621 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676 2023-05-29 12:58:28,692 DEBUG [Listener at localhost/44735] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45777,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK], DatanodeInfoWithStorage[127.0.0.1:33433,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] 2023-05-29 12:58:28,692 DEBUG [Listener at localhost/44735] wal.AbstractFSWAL(716): hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365096621 is not closed yet, will try archiving it next time 2023-05-29 12:58:28,692 DEBUG [Listener at localhost/44735] wal.TestLogRolling(512): recovering lease for hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365066892 2023-05-29 12:58:28,693 INFO [Listener at localhost/44735] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365066892 2023-05-29 12:58:28,696 WARN [IPC Server handler 3 on default port 33567] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365066892 has not been closed. Lease recovery is in progress. 
RecoveryId = 1022 for block blk_1073741832_1016 2023-05-29 12:58:28,698 INFO [Listener at localhost/44735] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365066892 after 5ms 2023-05-29 12:58:29,701 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@40f56831] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-503405845-172.31.14.131-1685365065911:blk_1073741832_1016, datanode=DatanodeInfoWithStorage[127.0.0.1:45777,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1016, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2160 getBytesOnDisk() = 2160 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data4/current/BP-503405845-172.31.14.131-1685365065911/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:58:32,699 INFO [Listener at localhost/44735] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365066892 after 4006ms 2023-05-29 12:58:32,699 DEBUG [Listener at localhost/44735] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365066892 2023-05-29 12:58:32,708 DEBUG [Listener at localhost/44735] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685365067514/Put/vlen=175/seqid=0] 2023-05-29 12:58:32,708 DEBUG [Listener at localhost/44735] wal.TestLogRolling(522): #4: [default/info:d/1685365067549/Put/vlen=9/seqid=0] 2023-05-29 12:58:32,708 DEBUG [Listener at localhost/44735] wal.TestLogRolling(522): #5: [hbase/info:d/1685365067572/Put/vlen=7/seqid=0] 2023-05-29 12:58:32,708 DEBUG [Listener at localhost/44735] wal.TestLogRolling(522): #3: 
[\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685365068004/Put/vlen=231/seqid=0] 2023-05-29 12:58:32,708 DEBUG [Listener at localhost/44735] wal.TestLogRolling(522): #4: [row1002/info:/1685365077654/Put/vlen=1045/seqid=0] 2023-05-29 12:58:32,708 DEBUG [Listener at localhost/44735] wal.ProtobufLogReader(420): EOF at position 2160 2023-05-29 12:58:32,708 DEBUG [Listener at localhost/44735] wal.TestLogRolling(512): recovering lease for hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365081126 2023-05-29 12:58:32,708 INFO [Listener at localhost/44735] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365081126 2023-05-29 12:58:32,709 WARN [IPC Server handler 2 on default port 33567] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365081126 has not been closed. Lease recovery is in progress. RecoveryId = 1023 for block blk_1073741838_1018 2023-05-29 12:58:32,709 INFO [Listener at localhost/44735] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365081126 after 1ms 2023-05-29 12:58:33,680 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@952bc0c] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-503405845-172.31.14.131-1685365065911:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:33433,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data1/current/BP-503405845-172.31.14.131-1685365065911/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data1/current/BP-503405845-172.31.14.131-1685365065911/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 4 more 2023-05-29 12:58:36,710 INFO [Listener at localhost/44735] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365081126 after 4002ms 2023-05-29 12:58:36,710 DEBUG [Listener at localhost/44735] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365081126 2023-05-29 12:58:36,714 DEBUG [Listener at localhost/44735] wal.TestLogRolling(522): #6: [row1003/info:/1685365091149/Put/vlen=1045/seqid=0] 2023-05-29 12:58:36,714 DEBUG [Listener at localhost/44735] wal.TestLogRolling(522): #7: [row1004/info:/1685365093155/Put/vlen=1045/seqid=0] 2023-05-29 12:58:36,714 DEBUG [Listener at localhost/44735] wal.ProtobufLogReader(420): EOF at position 2425 2023-05-29 12:58:36,715 DEBUG [Listener at localhost/44735] wal.TestLogRolling(512): recovering lease for hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365096621 2023-05-29 12:58:36,715 INFO [Listener at localhost/44735] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365096621 2023-05-29 12:58:36,715 INFO [Listener at localhost/44735] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365096621 after 0ms 2023-05-29 12:58:36,715 DEBUG [Listener at localhost/44735] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365096621 2023-05-29 12:58:36,719 DEBUG [Listener at localhost/44735] wal.TestLogRolling(522): #9: [row1005/info:/1685365106674/Put/vlen=1045/seqid=0] 2023-05-29 12:58:36,719 DEBUG [Listener at localhost/44735] wal.TestLogRolling(512): recovering lease for hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676 2023-05-29 12:58:36,719 INFO [Listener at localhost/44735] 
util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676 2023-05-29 12:58:36,719 WARN [IPC Server handler 0 on default port 33567] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676 has not been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021 2023-05-29 12:58:36,719 INFO [Listener at localhost/44735] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676 after 0ms 2023-05-29 12:58:37,680 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-514130651_17 at /127.0.0.1:49500 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:45777:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49500 dst: /127.0.0.1:45777 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45777 remote=/127.0.0.1:49500]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:58:37,681 WARN [ResponseProcessor for block BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 12:58:37,681 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-514130651_17 at /127.0.0.1:57640 [Receiving block BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:33433:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57640 dst: /127.0.0.1:33433 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:58:37,682 WARN [DataStreamer for file /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676 block BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45777,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK], DatanodeInfoWithStorage[127.0.0.1:33433,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:45777,DS-d98e0f63-897c-4c4e-8fa3-2a3deff8bb78,DISK]) is bad. 
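
Aside on the lease-recovery pattern visible in the surrounding entries (RecoverLeaseFSUtils logging "Failed to recover lease, attempt=0" and then "Recovered lease, attempt=1 ... after ~4000ms"): the sketch below is a minimal, hypothetical retry loop over DistributedFileSystem.recoverLease(), not the actual RecoverLeaseFSUtils code from HBase; the class name, attempt bound, and pause constant are invented for illustration.

    // Illustrative sketch only -- not the HBase implementation.
    // Mirrors the attempt=0 fail / attempt=1 success pattern in the log above.
    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public final class WalLeaseRecoverySketch {
      private static final int MAX_ATTEMPTS = 5;   // assumed bound, not taken from the log
      private static final long PAUSE_MS = 4000L;  // roughly the ~4s gap seen in the log

      static boolean recoverWalLease(FileSystem fs, Path wal)
          throws IOException, InterruptedException {
        if (!(fs instanceof DistributedFileSystem)) {
          return true; // only HDFS has leases to recover
        }
        DistributedFileSystem dfs = (DistributedFileSystem) fs;
        for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
          // recoverLease() returns true once the NameNode has closed the file; the first
          // call typically just triggers block recovery (the "RecoveryId = ..." lines)
          // and returns false, so the caller waits and retries.
          if (dfs.recoverLease(wal)) {
            return true;
          }
          Thread.sleep(PAUSE_MS);
        }
        return false;
      }
    }

The first attempt usually fails because the last block of the abandoned WAL is still being recovered on the datanodes, which is also why the BlockRecoveryWorker warnings ("replica.getGenerationStamp() < block.getGenerationStamp()") appear between the attempt=0 and attempt=1 entries.
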
2023-05-29 12:58:37,687 WARN [DataStreamer for file /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676 block BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:40,720 INFO [Listener at localhost/44735] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676 after 4001ms 2023-05-29 12:58:40,720 DEBUG [Listener at localhost/44735] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676 2023-05-29 12:58:40,724 DEBUG [Listener at localhost/44735] wal.ProtobufLogReader(420): EOF at position 83 2023-05-29 12:58:40,725 INFO [Listener at localhost/44735] regionserver.HRegion(2745): Flushing fb3e726092d5bb945ae2f1b5a14539de 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-05-29 12:58:40,726 WARN [RS:0;jenkins-hbase4:38253.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=11, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:40,726 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C38253%2C1685365066507:(num 1685365108676) roll requested 2023-05-29 12:58:40,726 DEBUG [Listener at localhost/44735] regionserver.HRegion(2446): Flush status journal for fb3e726092d5bb945ae2f1b5a14539de: 2023-05-29 12:58:40,726 INFO [Listener at localhost/44735] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) 
at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:40,727 INFO [Listener at localhost/44735] regionserver.HRegion(2745): Flushing c3dc8bad1449634a23d44482a46b7bd0 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-29 12:58:40,728 DEBUG [Listener at localhost/44735] regionserver.HRegion(2446): Flush status journal for c3dc8bad1449634a23d44482a46b7bd0: 2023-05-29 12:58:40,728 INFO [Listener at localhost/44735] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:40,729 INFO [Listener at localhost/44735] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.95 KB heapSize=5.48 KB 2023-05-29 12:58:40,729 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:40,730 DEBUG [Listener at localhost/44735] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-29 12:58:40,730 INFO [Listener at localhost/44735] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:40,741 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-29 12:58:40,741 INFO [Listener at localhost/44735] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-29 12:58:40,741 DEBUG [Listener at localhost/44735] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4b0747b3 to 127.0.0.1:59149 2023-05-29 12:58:40,742 DEBUG [Listener at localhost/44735] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:58:40,742 DEBUG [Listener at localhost/44735] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-29 12:58:40,742 DEBUG [Listener at localhost/44735] util.JVMClusterUtil(257): Found active master hash=1831354503, stopped=false 2023-05-29 12:58:40,742 INFO [Listener at localhost/44735] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36655,1685365066465 2023-05-29 12:58:40,743 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676 newFile=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365120726 2023-05-29 12:58:40,743 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL 2023-05-29 12:58:40,743 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365120726 2023-05-29 12:58:40,743 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown 
Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:40,743 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676 failed. 
Cause="Unexpected BlockUCState: BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-05-29 12:58:40,743 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:40,744 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: 
hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:40,745 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:58:40,745 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:40,746 INFO [Listener at localhost/44735] procedure2.ProcedureExecutor(629): Stopping 2023-05-29 12:58:40,746 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(165): Failed to shutdown wal java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45779,DS-a353c9f2-cd73-48f1-95b0-380744620ea2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:40,746 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): regionserver:38253-0x10077048dd10001, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 12:58:40,746 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 12:58:40,746 DEBUG [Listener at localhost/44735] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0d9fb230 to 127.0.0.1:59149 2023-05-29 12:58:40,747 DEBUG [Listener at localhost/44735] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:58:40,747 INFO [Listener at localhost/44735] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,38253,1685365066507' ***** 2023-05-29 12:58:40,747 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38253-0x10077048dd10001, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:58:40,747 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:58:40,747 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:58:40,747 INFO [Listener at localhost/44735] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 12:58:40,747 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:58:40,748 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-29 12:58:40,748 INFO [RS:0;jenkins-hbase4:38253] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 12:58:40,748 INFO [RS:0;jenkins-hbase4:38253] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-29 12:58:40,748 ERROR [regionserver/jenkins-hbase4:0.logRoller] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,38253,1685365066507: Failed log close in log roller ***** org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at 
com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:40,748 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 12:58:40,748 ERROR [regionserver/jenkins-hbase4:0.logRoller] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-29 12:58:40,748 INFO [RS:0;jenkins-hbase4:38253] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-29 12:58:40,749 DEBUG [regionserver/jenkins-hbase4:0.logRoller] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-29 12:58:40,749 INFO [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(3303): Received CLOSE for fb3e726092d5bb945ae2f1b5a14539de 2023-05-29 12:58:40,749 INFO [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(3303): Received CLOSE for c3dc8bad1449634a23d44482a46b7bd0 2023-05-29 12:58:40,749 INFO [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(1141): aborting server jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:58:40,749 DEBUG [RS:0;jenkins-hbase4:38253] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x21d39946 to 127.0.0.1:59149 2023-05-29 12:58:40,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fb3e726092d5bb945ae2f1b5a14539de, disabling compactions & flushes 2023-05-29 12:58:40,749 DEBUG [RS:0;jenkins-hbase4:38253] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:58:40,749 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 2023-05-29 12:58:40,749 INFO [RS:0;jenkins-hbase4:38253] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 12:58:40,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 2023-05-29 12:58:40,749 INFO [RS:0;jenkins-hbase4:38253] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 12:58:40,750 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. after waiting 0 ms 2023-05-29 12:58:40,750 INFO [RS:0;jenkins-hbase4:38253] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-05-29 12:58:40,750 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 2023-05-29 12:58:40,750 INFO [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 12:58:40,750 DEBUG [regionserver/jenkins-hbase4:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-29 12:58:40,750 DEBUG [regionserver/jenkins-hbase4:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-29 12:58:40,750 INFO [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-29 12:58:40,750 DEBUG [regionserver/jenkins-hbase4:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-29 12:58:40,750 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1825): Memstore data size is 4304 in region TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 2023-05-29 12:58:40,750 INFO [regionserver/jenkins-hbase4:0.logRoller] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1050148864, "init": 513802240, "max": 2051014656, "used": 406548936 }, "NonHeapMemoryUsage": { "committed": 139288576, "init": 2555904, "max": -1, "used": 136713768 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-29 12:58:40,750 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 12:58:40,750 DEBUG [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(1478): Online Regions={fb3e726092d5bb945ae2f1b5a14539de=TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de., c3dc8bad1449634a23d44482a46b7bd0=hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0., 1588230740=hbase:meta,,1.1588230740} 2023-05-29 12:58:40,751 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 12:58:40,750 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 
2023-05-29 12:58:40,751 DEBUG [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(1504): Waiting on 1588230740, c3dc8bad1449634a23d44482a46b7bd0, fb3e726092d5bb945ae2f1b5a14539de 2023-05-29 12:58:40,751 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 12:58:40,751 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fb3e726092d5bb945ae2f1b5a14539de: 2023-05-29 12:58:40,751 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 12:58:40,751 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 12:58:40,751 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685365067642.fb3e726092d5bb945ae2f1b5a14539de. 2023-05-29 12:58:40,751 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c3dc8bad1449634a23d44482a46b7bd0, disabling compactions & flushes 2023-05-29 12:58:40,751 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1825): Memstore data size is 3024 in region hbase:meta,,1.1588230740 2023-05-29 12:58:40,751 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-29 12:58:40,751 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36655] master.MasterRpcServices(609): jenkins-hbase4.apache.org,38253,1685365066507 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,38253,1685365066507: Failed log close in log roller ***** Cause: org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/WALs/jenkins-hbase4.apache.org,38253,1685365066507/jenkins-hbase4.apache.org%2C38253%2C1685365066507.1685365108676, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-503405845-172.31.14.131-1685365065911:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 12:58:40,751 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 12:58:40,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. 2023-05-29 12:58:40,751 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 12:58:40,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. 2023-05-29 12:58:40,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. after waiting 0 ms 2023-05-29 12:58:40,752 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-29 12:58:40,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. 2023-05-29 12:58:40,752 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1825): Memstore data size is 78 in region hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. 2023-05-29 12:58:40,752 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C38253%2C1685365066507.meta:.meta(num 1685365067066) roll requested 2023-05-29 12:58:40,752 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. 2023-05-29 12:58:40,752 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(874): WAL closed. Skipping rolling of writer 2023-05-29 12:58:40,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c3dc8bad1449634a23d44482a46b7bd0: 2023-05-29 12:58:40,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685365067137.c3dc8bad1449634a23d44482a46b7bd0. 
2023-05-29 12:58:40,768 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-29 12:58:40,768 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-29 12:58:40,768 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-29 12:58:40,951 INFO [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38253,1685365066507; all regions closed. 2023-05-29 12:58:40,951 DEBUG [RS:0;jenkins-hbase4:38253] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:58:40,951 INFO [RS:0;jenkins-hbase4:38253] regionserver.LeaseManager(133): Closed leases 2023-05-29 12:58:40,951 INFO [RS:0;jenkins-hbase4:38253] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-29 12:58:40,951 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 12:58:40,952 INFO [RS:0;jenkins-hbase4:38253] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38253 2023-05-29 12:58:40,956 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:58:40,956 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): regionserver:38253-0x10077048dd10001, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38253,1685365066507 2023-05-29 12:58:40,956 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): regionserver:38253-0x10077048dd10001, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:58:40,956 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38253,1685365066507] 2023-05-29 12:58:40,956 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38253,1685365066507; numProcessing=1 2023-05-29 12:58:40,958 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38253,1685365066507 already deleted, retry=false 2023-05-29 12:58:40,958 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38253,1685365066507 expired; onlineServers=0 2023-05-29 12:58:40,958 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,36655,1685365066465' ***** 2023-05-29 12:58:40,958 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-29 12:58:40,958 DEBUG [M:0;jenkins-hbase4:36655] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6917d9fa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 12:58:40,958 INFO 
[M:0;jenkins-hbase4:36655] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36655,1685365066465 2023-05-29 12:58:40,958 INFO [M:0;jenkins-hbase4:36655] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36655,1685365066465; all regions closed. 2023-05-29 12:58:40,958 DEBUG [M:0;jenkins-hbase4:36655] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:58:40,959 DEBUG [M:0;jenkins-hbase4:36655] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-29 12:58:40,959 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-29 12:58:40,959 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365066683] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365066683,5,FailOnTimeoutGroup] 2023-05-29 12:58:40,959 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365066677] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365066677,5,FailOnTimeoutGroup] 2023-05-29 12:58:40,959 DEBUG [M:0;jenkins-hbase4:36655] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-29 12:58:40,960 INFO [M:0;jenkins-hbase4:36655] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-29 12:58:40,960 INFO [M:0;jenkins-hbase4:36655] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-29 12:58:40,960 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-29 12:58:40,960 INFO [M:0;jenkins-hbase4:36655] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-29 12:58:40,960 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:58:40,961 DEBUG [M:0;jenkins-hbase4:36655] master.HMaster(1512): Stopping service threads 2023-05-29 12:58:40,961 INFO [M:0;jenkins-hbase4:36655] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-29 12:58:40,961 ERROR [M:0;jenkins-hbase4:36655] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-29 12:58:40,961 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:58:40,961 INFO [M:0;jenkins-hbase4:36655] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-29 12:58:40,961 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-29 12:58:40,962 DEBUG [M:0;jenkins-hbase4:36655] zookeeper.ZKUtil(398): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-29 12:58:40,962 WARN [M:0;jenkins-hbase4:36655] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-29 12:58:40,962 INFO [M:0;jenkins-hbase4:36655] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-29 12:58:40,962 INFO [M:0;jenkins-hbase4:36655] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-29 12:58:40,963 DEBUG [M:0;jenkins-hbase4:36655] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 12:58:40,963 INFO [M:0;jenkins-hbase4:36655] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:58:40,963 DEBUG [M:0;jenkins-hbase4:36655] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:58:40,963 DEBUG [M:0;jenkins-hbase4:36655] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 12:58:40,963 DEBUG [M:0;jenkins-hbase4:36655] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:58:40,963 INFO [M:0;jenkins-hbase4:36655] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.16 KB heapSize=45.78 KB 2023-05-29 12:58:40,979 INFO [M:0;jenkins-hbase4:36655] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.16 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/555c62a7c64648388d9dcc698896d3de 2023-05-29 12:58:40,985 DEBUG [M:0;jenkins-hbase4:36655] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/555c62a7c64648388d9dcc698896d3de as hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/555c62a7c64648388d9dcc698896d3de 2023-05-29 12:58:40,990 INFO [M:0;jenkins-hbase4:36655] regionserver.HStore(1080): Added hdfs://localhost:33567/user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/555c62a7c64648388d9dcc698896d3de, entries=11, sequenceid=92, filesize=7.0 K 2023-05-29 12:58:40,991 INFO [M:0;jenkins-hbase4:36655] regionserver.HRegion(2948): Finished flush of dataSize ~38.16 KB/39075, heapSize ~45.77 KB/46864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=92, compaction requested=false 2023-05-29 12:58:40,992 INFO [M:0;jenkins-hbase4:36655] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-29 12:58:40,992 DEBUG [M:0;jenkins-hbase4:36655] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:58:40,993 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/d5a046b9-5ae1-cc52-a89a-3c934bcd918b/MasterData/WALs/jenkins-hbase4.apache.org,36655,1685365066465 2023-05-29 12:58:40,997 INFO [M:0;jenkins-hbase4:36655] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-29 12:58:40,997 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 12:58:40,997 INFO [M:0;jenkins-hbase4:36655] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36655 2023-05-29 12:58:41,000 DEBUG [M:0;jenkins-hbase4:36655] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36655,1685365066465 already deleted, retry=false 2023-05-29 12:58:41,145 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 12:58:41,145 INFO [M:0;jenkins-hbase4:36655] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36655,1685365066465; zookeeper connection closed. 2023-05-29 12:58:41,145 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): master:36655-0x10077048dd10000, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 12:58:41,245 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): regionserver:38253-0x10077048dd10001, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 12:58:41,245 INFO [RS:0;jenkins-hbase4:38253] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38253,1685365066507; zookeeper connection closed. 
2023-05-29 12:58:41,245 DEBUG [Listener at localhost/43589-EventThread] zookeeper.ZKWatcher(600): regionserver:38253-0x10077048dd10001, quorum=127.0.0.1:59149, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 12:58:41,246 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@58201a48] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@58201a48 2023-05-29 12:58:41,248 INFO [Listener at localhost/44735] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-29 12:58:41,249 WARN [Listener at localhost/44735] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:58:41,252 INFO [Listener at localhost/44735] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:58:41,356 WARN [BP-503405845-172.31.14.131-1685365065911 heartbeating to localhost/127.0.0.1:33567] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 12:58:41,356 WARN [BP-503405845-172.31.14.131-1685365065911 heartbeating to localhost/127.0.0.1:33567] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-503405845-172.31.14.131-1685365065911 (Datanode Uuid aa38821d-2331-44dd-9614-a83f2835d9d4) service to localhost/127.0.0.1:33567 2023-05-29 12:58:41,357 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data3/current/BP-503405845-172.31.14.131-1685365065911] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:58:41,357 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data4/current/BP-503405845-172.31.14.131-1685365065911] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:58:41,358 WARN [Listener at localhost/44735] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:58:41,362 INFO [Listener at localhost/44735] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:58:41,455 WARN [BP-503405845-172.31.14.131-1685365065911 heartbeating to localhost/127.0.0.1:33567] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-503405845-172.31.14.131-1685365065911 (Datanode Uuid 993d9d5c-8067-4fe9-affc-f469d0d75b43) service to localhost/127.0.0.1:33567 2023-05-29 12:58:41,456 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data1/current/BP-503405845-172.31.14.131-1685365065911] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:58:41,456 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/cluster_a4723587-814a-d301-7090-13572863a8a9/dfs/data/data2/current/BP-503405845-172.31.14.131-1685365065911] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to 
refresh disk information: sleep interrupted 2023-05-29 12:58:41,475 INFO [Listener at localhost/44735] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:58:41,587 INFO [Listener at localhost/44735] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-29 12:58:41,599 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-29 12:58:41,608 INFO [Listener at localhost/44735] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=87 (was 78) - Thread LEAK? -, OpenFileDescriptor=461 (was 461), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=69 (was 45) - SystemLoadAverage LEAK? -, ProcessCount=167 (was 167), AvailableMemoryMB=3098 (was 3429) 2023-05-29 12:58:41,617 INFO [Listener at localhost/44735] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=87, OpenFileDescriptor=461, MaxFileDescriptor=60000, SystemLoadAverage=69, ProcessCount=167, AvailableMemoryMB=3098 2023-05-29 12:58:41,617 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-29 12:58:41,617 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/hadoop.log.dir so I do NOT create it in target/test-data/03fac973-5717-a151-63df-49e74cc94184 2023-05-29 12:58:41,617 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6cdfb97d-ca73-8d89-4cb9-5bfab859fa63/hadoop.tmp.dir so I do NOT create it in target/test-data/03fac973-5717-a151-63df-49e74cc94184 2023-05-29 12:58:41,617 INFO [Listener at localhost/44735] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/cluster_973ea724-78f7-8ec9-b3ec-235fbbbb3474, deleteOnExit=true 2023-05-29 12:58:41,617 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-29 12:58:41,617 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/test.cache.data in system properties and HBase conf 2023-05-29 12:58:41,617 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/hadoop.tmp.dir in system properties and HBase conf 2023-05-29 12:58:41,618 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/hadoop.log.dir in system properties and HBase conf 2023-05-29 12:58:41,618 INFO 
[Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-29 12:58:41,618 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-29 12:58:41,618 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-29 12:58:41,618 DEBUG [Listener at localhost/44735] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-29 12:58:41,618 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-29 12:58:41,618 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-29 12:58:41,618 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-29 12:58:41,619 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 12:58:41,619 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-29 12:58:41,619 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-29 12:58:41,619 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 12:58:41,619 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 12:58:41,619 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-29 12:58:41,619 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/nfs.dump.dir in system properties and HBase conf 2023-05-29 12:58:41,619 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/java.io.tmpdir in system properties and HBase conf 2023-05-29 12:58:41,619 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 12:58:41,619 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-29 12:58:41,619 INFO [Listener at localhost/44735] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-29 12:58:41,621 WARN [Listener at localhost/44735] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
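The entries above record HBaseTestingUtility spinning up a fresh mini-cluster for the next test case (one master, one region server, two data nodes, one ZooKeeper server, createRootDir=false, createWALDir=false). As a reading aid only, here is a minimal, hypothetical JUnit-style sketch of how such a startup is typically driven: the class names HBaseTestingUtility and StartMiniClusterOption are taken from the log itself, while the surrounding wiring (class name, main method, try/finally) is an illustrative assumption and not the actual TestLogRolling source.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterStartupSketch {
  public static void main(String[] args) throws Exception {
    // Mirrors the StartMiniClusterOption values printed in the log:
    // numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1
    // (createRootDir/createWALDir are left at their defaults, false, as logged).
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    try {
      // Starts DFS, a MiniZooKeeperCluster, one HMaster and one HRegionServer,
      // producing startup output like the entries above.
      util.startMiniCluster(option);
      // ... a test body such as testCompactionRecordDoesntBlockRolling would run here ...
    } finally {
      // Tears the cluster down again, matching the later "Minicluster is down" entries.
      util.shutdownMiniCluster();
    }
  }
}

The shutdown half of the same pattern corresponds to the sequence logged at the top of this section, where the master and region server exit and the data nodes end their block pool services.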
2023-05-29 12:58:41,623 WARN [Listener at localhost/44735] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 12:58:41,624 WARN [Listener at localhost/44735] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 12:58:41,665 WARN [Listener at localhost/44735] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:58:41,667 INFO [Listener at localhost/44735] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:58:41,671 INFO [Listener at localhost/44735] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/java.io.tmpdir/Jetty_localhost_41921_hdfs____qyf9dx/webapp 2023-05-29 12:58:41,761 INFO [Listener at localhost/44735] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41921 2023-05-29 12:58:41,762 WARN [Listener at localhost/44735] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-29 12:58:41,765 WARN [Listener at localhost/44735] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 12:58:41,765 WARN [Listener at localhost/44735] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 12:58:41,809 WARN [Listener at localhost/38947] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:58:41,819 WARN [Listener at localhost/38947] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:58:41,821 WARN [Listener at localhost/38947] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:58:41,822 INFO [Listener at localhost/38947] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:58:41,826 INFO [Listener at localhost/38947] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/java.io.tmpdir/Jetty_localhost_44301_datanode____.lpxmat/webapp 2023-05-29 12:58:41,916 INFO [Listener at localhost/38947] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44301 2023-05-29 12:58:41,922 WARN [Listener at localhost/39615] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:58:41,933 WARN [Listener at localhost/39615] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:58:41,935 WARN [Listener at localhost/39615] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:58:41,936 INFO [Listener at localhost/39615] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:58:41,938 INFO [Listener at localhost/39615] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/java.io.tmpdir/Jetty_localhost_44883_datanode____qcif4q/webapp 2023-05-29 12:58:42,006 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbc18ca83bdf997db: Processing first storage report for DS-78539aa6-d56f-45b5-9094-5631d79a77a8 from datanode a81f7297-bd3c-4abf-a833-12d99bd0595a 2023-05-29 12:58:42,006 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbc18ca83bdf997db: from storage DS-78539aa6-d56f-45b5-9094-5631d79a77a8 node DatanodeRegistration(127.0.0.1:37213, datanodeUuid=a81f7297-bd3c-4abf-a833-12d99bd0595a, infoPort=35133, infoSecurePort=0, ipcPort=39615, storageInfo=lv=-57;cid=testClusterID;nsid=1529264670;c=1685365121626), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:58:42,006 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbc18ca83bdf997db: Processing first storage report for DS-39f5f227-a6cf-4f3b-a8b7-a14388a28949 from datanode a81f7297-bd3c-4abf-a833-12d99bd0595a 2023-05-29 12:58:42,007 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbc18ca83bdf997db: from storage DS-39f5f227-a6cf-4f3b-a8b7-a14388a28949 node DatanodeRegistration(127.0.0.1:37213, datanodeUuid=a81f7297-bd3c-4abf-a833-12d99bd0595a, infoPort=35133, infoSecurePort=0, ipcPort=39615, storageInfo=lv=-57;cid=testClusterID;nsid=1529264670;c=1685365121626), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:58:42,031 INFO [Listener at localhost/39615] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44883 2023-05-29 12:58:42,037 WARN [Listener at localhost/38477] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:58:42,123 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x13d6563c18b869c5: Processing first storage report for DS-498378ef-d9ac-4a51-9b10-c9ed3e060ea8 from datanode 6b7bdf10-1620-4334-a697-45e893beb643 2023-05-29 12:58:42,123 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x13d6563c18b869c5: from storage DS-498378ef-d9ac-4a51-9b10-c9ed3e060ea8 node DatanodeRegistration(127.0.0.1:40853, datanodeUuid=6b7bdf10-1620-4334-a697-45e893beb643, infoPort=36641, infoSecurePort=0, ipcPort=38477, storageInfo=lv=-57;cid=testClusterID;nsid=1529264670;c=1685365121626), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:58:42,123 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x13d6563c18b869c5: Processing first storage report for DS-cf4a4697-e7a2-4964-b99d-3c3988e615ee from datanode 6b7bdf10-1620-4334-a697-45e893beb643 2023-05-29 12:58:42,123 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x13d6563c18b869c5: from storage DS-cf4a4697-e7a2-4964-b99d-3c3988e615ee node DatanodeRegistration(127.0.0.1:40853, datanodeUuid=6b7bdf10-1620-4334-a697-45e893beb643, infoPort=36641, infoSecurePort=0, ipcPort=38477, storageInfo=lv=-57;cid=testClusterID;nsid=1529264670;c=1685365121626), 
blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:58:42,143 DEBUG [Listener at localhost/38477] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184 2023-05-29 12:58:42,145 INFO [Listener at localhost/38477] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/cluster_973ea724-78f7-8ec9-b3ec-235fbbbb3474/zookeeper_0, clientPort=62871, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/cluster_973ea724-78f7-8ec9-b3ec-235fbbbb3474/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/cluster_973ea724-78f7-8ec9-b3ec-235fbbbb3474/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-29 12:58:42,146 INFO [Listener at localhost/38477] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62871 2023-05-29 12:58:42,146 INFO [Listener at localhost/38477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:58:42,147 INFO [Listener at localhost/38477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:58:42,162 INFO [Listener at localhost/38477] util.FSUtils(471): Created version file at hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b with version=8 2023-05-29 12:58:42,162 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/hbase-staging 2023-05-29 12:58:42,163 INFO [Listener at localhost/38477] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 12:58:42,163 INFO [Listener at localhost/38477] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:58:42,164 INFO [Listener at localhost/38477] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 12:58:42,164 INFO [Listener at localhost/38477] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 12:58:42,164 INFO [Listener at localhost/38477] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:58:42,164 INFO [Listener at localhost/38477] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 
12:58:42,164 INFO [Listener at localhost/38477] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 12:58:42,165 INFO [Listener at localhost/38477] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43689 2023-05-29 12:58:42,165 INFO [Listener at localhost/38477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:58:42,166 INFO [Listener at localhost/38477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:58:42,167 INFO [Listener at localhost/38477] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43689 connecting to ZooKeeper ensemble=127.0.0.1:62871 2023-05-29 12:58:42,173 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:436890x0, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 12:58:42,174 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43689-0x100770567640000 connected 2023-05-29 12:58:42,187 DEBUG [Listener at localhost/38477] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:58:42,188 DEBUG [Listener at localhost/38477] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:58:42,188 DEBUG [Listener at localhost/38477] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 12:58:42,188 DEBUG [Listener at localhost/38477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43689 2023-05-29 12:58:42,188 DEBUG [Listener at localhost/38477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43689 2023-05-29 12:58:42,189 DEBUG [Listener at localhost/38477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43689 2023-05-29 12:58:42,189 DEBUG [Listener at localhost/38477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43689 2023-05-29 12:58:42,189 DEBUG [Listener at localhost/38477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43689 2023-05-29 12:58:42,189 INFO [Listener at localhost/38477] master.HMaster(444): hbase.rootdir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b, hbase.cluster.distributed=false 2023-05-29 12:58:42,202 INFO [Listener at localhost/38477] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 12:58:42,202 INFO [Listener at localhost/38477] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:58:42,202 INFO [Listener at 
localhost/38477] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 12:58:42,202 INFO [Listener at localhost/38477] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 12:58:42,202 INFO [Listener at localhost/38477] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:58:42,202 INFO [Listener at localhost/38477] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 12:58:42,202 INFO [Listener at localhost/38477] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 12:58:42,203 INFO [Listener at localhost/38477] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37819 2023-05-29 12:58:42,204 INFO [Listener at localhost/38477] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 12:58:42,206 DEBUG [Listener at localhost/38477] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 12:58:42,206 INFO [Listener at localhost/38477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:58:42,207 INFO [Listener at localhost/38477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:58:42,208 INFO [Listener at localhost/38477] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37819 connecting to ZooKeeper ensemble=127.0.0.1:62871 2023-05-29 12:58:42,211 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:378190x0, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 12:58:42,212 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37819-0x100770567640001 connected 2023-05-29 12:58:42,212 DEBUG [Listener at localhost/38477] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:58:42,212 DEBUG [Listener at localhost/38477] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:58:42,213 DEBUG [Listener at localhost/38477] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 12:58:42,213 DEBUG [Listener at localhost/38477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37819 2023-05-29 12:58:42,213 DEBUG [Listener at localhost/38477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37819 2023-05-29 12:58:42,214 DEBUG [Listener at localhost/38477] ipc.RpcExecutor(311): 
Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37819 2023-05-29 12:58:42,214 DEBUG [Listener at localhost/38477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37819 2023-05-29 12:58:42,214 DEBUG [Listener at localhost/38477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37819 2023-05-29 12:58:42,215 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,43689,1685365122163 2023-05-29 12:58:42,217 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 12:58:42,217 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,43689,1685365122163 2023-05-29 12:58:42,219 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 12:58:42,219 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 12:58:42,219 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:58:42,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 12:58:42,221 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,43689,1685365122163 from backup master directory 2023-05-29 12:58:42,221 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 12:58:42,222 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,43689,1685365122163 2023-05-29 12:58:42,222 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 12:58:42,222 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 12:58:42,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,43689,1685365122163 2023-05-29 12:58:42,234 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/hbase.id with ID: 7acad415-5405-460e-9a92-a0827956e977 2023-05-29 12:58:42,244 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:58:42,246 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:58:42,255 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0288de11 to 127.0.0.1:62871 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:58:42,259 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5f5fa4cc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:58:42,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 12:58:42,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-29 12:58:42,260 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:58:42,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/data/master/store-tmp 2023-05-29 12:58:42,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect 
now enable 2023-05-29 12:58:42,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 12:58:42,268 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:58:42,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:58:42,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 12:58:42,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:58:42,268 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:58:42,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:58:42,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/WALs/jenkins-hbase4.apache.org,43689,1685365122163 2023-05-29 12:58:42,271 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43689%2C1685365122163, suffix=, logDir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/WALs/jenkins-hbase4.apache.org,43689,1685365122163, archiveDir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/oldWALs, maxLogs=10 2023-05-29 12:58:42,276 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/WALs/jenkins-hbase4.apache.org,43689,1685365122163/jenkins-hbase4.apache.org%2C43689%2C1685365122163.1685365122271 2023-05-29 12:58:42,276 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40853,DS-498378ef-d9ac-4a51-9b10-c9ed3e060ea8,DISK], DatanodeInfoWithStorage[127.0.0.1:37213,DS-78539aa6-d56f-45b5-9094-5631d79a77a8,DISK]] 2023-05-29 12:58:42,276 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:58:42,276 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:58:42,276 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:58:42,276 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:58:42,279 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:58:42,280 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-29 12:58:42,281 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-29 12:58:42,281 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:58:42,282 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:58:42,282 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:58:42,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:58:42,287 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:58:42,287 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=721918, jitterRate=-0.08203484117984772}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:58:42,287 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:58:42,287 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-29 12:58:42,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-29 12:58:42,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-29 12:58:42,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-29 12:58:42,289 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-29 12:58:42,289 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-29 12:58:42,289 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-29 12:58:42,291 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-29 12:58:42,292 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-29 12:58:42,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-29 12:58:42,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
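One detail worth noting from the WAL setup logged a little earlier ("WAL configuration: blocksize=256 MB, rollsize=128 MB, ..., maxLogs=10"): the roll size is derived from the WAL block size via hbase.regionserver.logroll.multiplier, which in HBase 2.x defaults to 0.5, so the two logged figures are consistent (256 MB x 0.5 = 128 MB). The small sketch below only illustrates that arithmetic under the default-multiplier assumption; it is not code from the test, and the test does not appear to override the multiplier given the logged values.

public class WalRollSizeSketch {
  public static void main(String[] args) {
    // Value taken from the "WAL configuration" entry in the log above.
    long blockSizeBytes = 256L * 1024 * 1024;            // blocksize=256 MB
    // Assumed default of hbase.regionserver.logroll.multiplier.
    double rollMultiplier = 0.5;
    long rollSizeBytes = (long) (blockSizeBytes * rollMultiplier);
    System.out.println(rollSizeBytes);                   // 134217728 = 128 MB, matching rollsize=128 MB
  }
}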
2023-05-29 12:58:42,303 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-29 12:58:42,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-29 12:58:42,305 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-29 12:58:42,308 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:58:42,308 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-29 12:58:42,309 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-29 12:58:42,310 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-29 12:58:42,311 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 12:58:42,311 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 12:58:42,311 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:58:42,312 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,43689,1685365122163, sessionid=0x100770567640000, setting cluster-up flag (Was=false) 2023-05-29 12:58:42,316 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:58:42,321 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-29 12:58:42,322 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43689,1685365122163 2023-05-29 12:58:42,326 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 
12:58:42,330 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-29 12:58:42,331 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43689,1685365122163 2023-05-29 12:58:42,332 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/.hbase-snapshot/.tmp 2023-05-29 12:58:42,336 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-29 12:58:42,336 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:58:42,336 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:58:42,337 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:58:42,337 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:58:42,337 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-29 12:58:42,337 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:58:42,337 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 12:58:42,337 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:58:42,340 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685365152340 2023-05-29 12:58:42,340 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-29 12:58:42,340 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-29 12:58:42,340 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-29 12:58:42,341 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-29 12:58:42,341 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-29 12:58:42,341 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-29 12:58:42,341 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 12:58:42,341 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 12:58:42,341 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-29 12:58:42,341 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-29 12:58:42,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-29 12:58:42,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-29 12:58:42,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-29 12:58:42,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-29 12:58:42,342 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365122342,5,FailOnTimeoutGroup] 2023-05-29 12:58:42,343 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 12:58:42,347 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365122343,5,FailOnTimeoutGroup] 2023-05-29 12:58:42,347 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-29 12:58:42,347 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-29 12:58:42,347 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-29 12:58:42,347 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-29 12:58:42,362 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 12:58:42,363 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 12:58:42,363 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b 2023-05-29 12:58:42,370 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:58:42,371 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 12:58:42,372 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/info 2023-05-29 12:58:42,372 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 12:58:42,373 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:58:42,373 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 12:58:42,374 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:58:42,375 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 12:58:42,375 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:58:42,375 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 12:58:42,376 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/table 2023-05-29 12:58:42,376 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 12:58:42,377 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:58:42,378 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740 2023-05-29 12:58:42,378 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740 2023-05-29 12:58:42,380 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 12:58:42,381 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 12:58:42,383 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:58:42,384 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=853690, jitterRate=0.085523322224617}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 12:58:42,384 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 12:58:42,384 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 12:58:42,384 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 12:58:42,384 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 12:58:42,384 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 12:58:42,384 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 12:58:42,384 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 12:58:42,384 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 12:58:42,385 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 12:58:42,385 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-29 12:58:42,385 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-29 12:58:42,387 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-29 12:58:42,388 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 
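Earlier in this stretch, HMaster(1461) notes that reopening regions with a very high store-file reference count stays disabled until hbase.regions.recovery.store.file.ref.count is given a value greater than 0. A minimal sketch of turning that chore on; the threshold of 256 is an arbitrary illustration, not a recommendation.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class StoreFileRefCountRecoverySketch {
      public static Configuration configure() {
        Configuration conf = HBaseConfiguration.create();
        // Any value > 0 enables the master chore mentioned in the log; <= 0 keeps it disabled.
        conf.setInt("hbase.regions.recovery.store.file.ref.count", 256);
        return conf;
      }
    }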
2023-05-29 12:58:42,416 INFO [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(951): ClusterId : 7acad415-5405-460e-9a92-a0827956e977 2023-05-29 12:58:42,416 DEBUG [RS:0;jenkins-hbase4:37819] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 12:58:42,419 DEBUG [RS:0;jenkins-hbase4:37819] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 12:58:42,419 DEBUG [RS:0;jenkins-hbase4:37819] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 12:58:42,421 DEBUG [RS:0;jenkins-hbase4:37819] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 12:58:42,422 DEBUG [RS:0;jenkins-hbase4:37819] zookeeper.ReadOnlyZKClient(139): Connect 0x66de51b5 to 127.0.0.1:62871 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:58:42,425 DEBUG [RS:0;jenkins-hbase4:37819] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6819a31e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:58:42,425 DEBUG [RS:0;jenkins-hbase4:37819] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6c263ac5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 12:58:42,434 DEBUG [RS:0;jenkins-hbase4:37819] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37819 2023-05-29 12:58:42,434 INFO [RS:0;jenkins-hbase4:37819] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 12:58:42,434 INFO [RS:0;jenkins-hbase4:37819] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 12:58:42,434 DEBUG [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(1022): About to register with Master. 
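The two AbstractRpcClient records above show the region server's internal RPC clients using org.apache.hadoop.hbase.codec.KeyValueCodec. For a client application, the codec is normally selected through the hbase.client.rpc.codec setting; that property name is stated as an assumption from standard client configuration, since the log only shows the resulting codec instance.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcCodecSketch {
      public static Configuration configure() {
        Configuration conf = HBaseConfiguration.create();
        // Select the cell codec for the RPC client; KeyValueCodec matches the instances logged above.
        conf.set("hbase.client.rpc.codec", "org.apache.hadoop.hbase.codec.KeyValueCodec");
        return conf;
      }
    }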
2023-05-29 12:58:42,435 INFO [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,43689,1685365122163 with isa=jenkins-hbase4.apache.org/172.31.14.131:37819, startcode=1685365122201 2023-05-29 12:58:42,435 DEBUG [RS:0;jenkins-hbase4:37819] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 12:58:42,439 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43305, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 12:58:42,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:42,440 DEBUG [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b 2023-05-29 12:58:42,440 DEBUG [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38947 2023-05-29 12:58:42,440 DEBUG [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 12:58:42,441 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:58:42,442 DEBUG [RS:0;jenkins-hbase4:37819] zookeeper.ZKUtil(162): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:42,442 WARN [RS:0;jenkins-hbase4:37819] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 12:58:42,442 INFO [RS:0;jenkins-hbase4:37819] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:58:42,442 DEBUG [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(1946): logDir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:42,442 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37819,1685365122201] 2023-05-29 12:58:42,446 DEBUG [RS:0;jenkins-hbase4:37819] zookeeper.ZKUtil(162): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:42,447 DEBUG [RS:0;jenkins-hbase4:37819] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 12:58:42,447 INFO [RS:0;jenkins-hbase4:37819] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 12:58:42,448 INFO [RS:0;jenkins-hbase4:37819] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 12:58:42,448 INFO [RS:0;jenkins-hbase4:37819] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 12:58:42,448 INFO [RS:0;jenkins-hbase4:37819] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 12:58:42,448 INFO [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 12:58:42,449 INFO [RS:0;jenkins-hbase4:37819] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
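These records show the region server wiring up an FSHLog-based WAL provider, a roughly 782 MB global memstore limit, and 100/50 MB-per-second compaction throughput bounds. A hedged sketch of the configuration knobs behind those numbers; the property names are the commonly documented ones and should be checked against the HBase version actually in use.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RegionServerTuningSketch {
      public static Configuration configure() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "filesystem");                    // FSHLogProvider, as instantiated above
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);  // heap fraction behind globalMemStoreLimit
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024); // 100 MB/s upper bound
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);   // 50 MB/s lower bound
        return conf;
      }
    }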
2023-05-29 12:58:42,450 DEBUG [RS:0;jenkins-hbase4:37819] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:58:42,450 DEBUG [RS:0;jenkins-hbase4:37819] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:58:42,450 DEBUG [RS:0;jenkins-hbase4:37819] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:58:42,450 DEBUG [RS:0;jenkins-hbase4:37819] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:58:42,450 DEBUG [RS:0;jenkins-hbase4:37819] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:58:42,450 DEBUG [RS:0;jenkins-hbase4:37819] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 12:58:42,450 DEBUG [RS:0;jenkins-hbase4:37819] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:58:42,450 DEBUG [RS:0;jenkins-hbase4:37819] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:58:42,450 DEBUG [RS:0;jenkins-hbase4:37819] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:58:42,450 DEBUG [RS:0;jenkins-hbase4:37819] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:58:42,451 INFO [RS:0;jenkins-hbase4:37819] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 12:58:42,451 INFO [RS:0;jenkins-hbase4:37819] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 12:58:42,452 INFO [RS:0;jenkins-hbase4:37819] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 12:58:42,465 INFO [RS:0;jenkins-hbase4:37819] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 12:58:42,465 INFO [RS:0;jenkins-hbase4:37819] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37819,1685365122201-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 12:58:42,476 INFO [RS:0;jenkins-hbase4:37819] regionserver.Replication(203): jenkins-hbase4.apache.org,37819,1685365122201 started 2023-05-29 12:58:42,476 INFO [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37819,1685365122201, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37819, sessionid=0x100770567640001 2023-05-29 12:58:42,476 DEBUG [RS:0;jenkins-hbase4:37819] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 12:58:42,476 DEBUG [RS:0;jenkins-hbase4:37819] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:42,476 DEBUG [RS:0;jenkins-hbase4:37819] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37819,1685365122201' 2023-05-29 12:58:42,476 DEBUG [RS:0;jenkins-hbase4:37819] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:58:42,476 DEBUG [RS:0;jenkins-hbase4:37819] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:58:42,477 DEBUG [RS:0;jenkins-hbase4:37819] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 12:58:42,477 DEBUG [RS:0;jenkins-hbase4:37819] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 12:58:42,477 DEBUG [RS:0;jenkins-hbase4:37819] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:42,477 DEBUG [RS:0;jenkins-hbase4:37819] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37819,1685365122201' 2023-05-29 12:58:42,477 DEBUG [RS:0;jenkins-hbase4:37819] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 12:58:42,477 DEBUG [RS:0;jenkins-hbase4:37819] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 12:58:42,477 DEBUG [RS:0;jenkins-hbase4:37819] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 12:58:42,477 INFO [RS:0;jenkins-hbase4:37819] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 12:58:42,477 INFO [RS:0;jenkins-hbase4:37819] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
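Both quota managers report "Quota support disabled" above. If quotas were wanted in a setup like this, the usual switch is the hbase.quota.enabled flag; this is offered as a generic sketch, not something the test itself does.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class QuotaEnableSketch {
      public static Configuration configure() {
        Configuration conf = HBaseConfiguration.create();
        // RegionServerRpcQuotaManager and RegionServerSpaceQuotaManager both key off this flag.
        conf.setBoolean("hbase.quota.enabled", true);
        return conf;
      }
    }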
2023-05-29 12:58:42,538 DEBUG [jenkins-hbase4:43689] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-29 12:58:42,539 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37819,1685365122201, state=OPENING 2023-05-29 12:58:42,541 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-29 12:58:42,542 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:58:42,542 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37819,1685365122201}] 2023-05-29 12:58:42,542 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 12:58:42,579 INFO [RS:0;jenkins-hbase4:37819] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37819%2C1685365122201, suffix=, logDir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201, archiveDir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/oldWALs, maxLogs=32 2023-05-29 12:58:42,587 INFO [RS:0;jenkins-hbase4:37819] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365122580 2023-05-29 12:58:42,587 DEBUG [RS:0;jenkins-hbase4:37819] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37213,DS-78539aa6-d56f-45b5-9094-5631d79a77a8,DISK], DatanodeInfoWithStorage[127.0.0.1:40853,DS-498378ef-d9ac-4a51-9b10-c9ed3e060ea8,DISK]] 2023-05-29 12:58:42,697 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:42,697 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-29 12:58:42,699 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32970, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-29 12:58:42,702 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-29 12:58:42,703 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:58:42,704 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37819%2C1685365122201.meta, suffix=.meta, logDir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201, archiveDir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/oldWALs, maxLogs=32 2023-05-29 12:58:42,711 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201/jenkins-hbase4.apache.org%2C37819%2C1685365122201.meta.1685365122705.meta 2023-05-29 12:58:42,711 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37213,DS-78539aa6-d56f-45b5-9094-5631d79a77a8,DISK], DatanodeInfoWithStorage[127.0.0.1:40853,DS-498378ef-d9ac-4a51-9b10-c9ed3e060ea8,DISK]] 2023-05-29 12:58:42,712 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:58:42,712 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-29 12:58:42,712 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-29 12:58:42,712 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-29 12:58:42,712 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-29 12:58:42,712 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:58:42,712 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-29 12:58:42,712 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-29 12:58:42,713 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 12:58:42,714 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/info 2023-05-29 12:58:42,714 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/info 2023-05-29 12:58:42,715 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 12:58:42,715 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:58:42,715 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 12:58:42,716 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:58:42,716 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:58:42,716 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 12:58:42,717 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:58:42,717 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 12:58:42,718 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/table 2023-05-29 12:58:42,718 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/table 2023-05-29 12:58:42,718 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 12:58:42,719 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:58:42,719 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740 2023-05-29 12:58:42,720 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740 2023-05-29 12:58:42,722 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 12:58:42,723 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 12:58:42,724 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=821412, jitterRate=0.04448007047176361}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 12:58:42,724 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 12:58:42,726 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685365122696 2023-05-29 12:58:42,729 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-29 12:58:42,730 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-29 12:58:42,731 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37819,1685365122201, state=OPEN 2023-05-29 12:58:42,732 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-29 12:58:42,733 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 12:58:42,735 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-29 12:58:42,735 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37819,1685365122201 in 190 msec 2023-05-29 12:58:42,737 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-29 12:58:42,737 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 350 msec 2023-05-29 12:58:42,739 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 404 msec 2023-05-29 12:58:42,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685365122739, completionTime=-1 2023-05-29 12:58:42,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-29 12:58:42,739 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-29 12:58:42,742 DEBUG [hconnection-0x4bd0e661-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 12:58:42,744 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32984, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 12:58:42,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-29 12:58:42,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685365182745 2023-05-29 12:58:42,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685365242745 2023-05-29 12:58:42,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-29 12:58:42,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43689,1685365122163-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 12:58:42,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43689,1685365122163-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:58:42,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43689,1685365122163-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:58:42,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:43689, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:58:42,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-29 12:58:42,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
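At this point the master has assigned hbase:meta, counted its single region server, and joined the cluster. From a client, the same picture can be read back through the Admin API; a minimal sketch, assuming the cluster configuration is on the classpath.

    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClusterStatusSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          ClusterMetrics metrics = admin.getClusterMetrics();
          System.out.println("live region servers: " + metrics.getLiveServerMetrics().size());
          System.out.println("regions in transition: " + metrics.getRegionStatesInTransition().size());
        }
      }
    }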
2023-05-29 12:58:42,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 12:58:42,753 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-29 12:58:42,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-29 12:58:42,755 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 12:58:42,756 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 12:58:42,760 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/.tmp/data/hbase/namespace/b87290619f1c8530cf44d6a257730795 2023-05-29 12:58:42,761 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/.tmp/data/hbase/namespace/b87290619f1c8530cf44d6a257730795 empty. 2023-05-29 12:58:42,762 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/.tmp/data/hbase/namespace/b87290619f1c8530cf44d6a257730795 2023-05-29 12:58:42,762 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-29 12:58:42,776 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-29 12:58:42,777 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => b87290619f1c8530cf44d6a257730795, NAME => 'hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/.tmp 2023-05-29 12:58:42,787 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:58:42,787 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing b87290619f1c8530cf44d6a257730795, disabling compactions & flushes 2023-05-29 12:58:42,787 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 
2023-05-29 12:58:42,787 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 2023-05-29 12:58:42,787 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. after waiting 0 ms 2023-05-29 12:58:42,787 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 2023-05-29 12:58:42,787 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 2023-05-29 12:58:42,787 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for b87290619f1c8530cf44d6a257730795: 2023-05-29 12:58:42,790 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 12:58:42,791 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365122791"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685365122791"}]},"ts":"1685365122791"} 2023-05-29 12:58:42,794 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 12:58:42,795 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 12:58:42,795 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365122795"}]},"ts":"1685365122795"} 2023-05-29 12:58:42,796 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-29 12:58:42,803 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b87290619f1c8530cf44d6a257730795, ASSIGN}] 2023-05-29 12:58:42,805 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b87290619f1c8530cf44d6a257730795, ASSIGN 2023-05-29 12:58:42,806 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b87290619f1c8530cf44d6a257730795, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37819,1685365122201; forceNewPlan=false, retain=false 2023-05-29 12:58:42,958 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b87290619f1c8530cf44d6a257730795, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:42,958 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365122957"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685365122957"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685365122957"}]},"ts":"1685365122957"} 2023-05-29 12:58:42,960 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure b87290619f1c8530cf44d6a257730795, server=jenkins-hbase4.apache.org,37819,1685365122201}] 2023-05-29 12:58:43,116 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 2023-05-29 12:58:43,117 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b87290619f1c8530cf44d6a257730795, NAME => 'hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:58:43,117 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b87290619f1c8530cf44d6a257730795 2023-05-29 12:58:43,117 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:58:43,117 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b87290619f1c8530cf44d6a257730795 2023-05-29 12:58:43,117 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b87290619f1c8530cf44d6a257730795 2023-05-29 12:58:43,118 INFO [StoreOpener-b87290619f1c8530cf44d6a257730795-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b87290619f1c8530cf44d6a257730795 2023-05-29 12:58:43,120 DEBUG [StoreOpener-b87290619f1c8530cf44d6a257730795-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/namespace/b87290619f1c8530cf44d6a257730795/info 2023-05-29 12:58:43,120 DEBUG [StoreOpener-b87290619f1c8530cf44d6a257730795-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/namespace/b87290619f1c8530cf44d6a257730795/info 2023-05-29 12:58:43,120 INFO [StoreOpener-b87290619f1c8530cf44d6a257730795-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b87290619f1c8530cf44d6a257730795 columnFamilyName info 2023-05-29 12:58:43,121 INFO [StoreOpener-b87290619f1c8530cf44d6a257730795-1] regionserver.HStore(310): Store=b87290619f1c8530cf44d6a257730795/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:58:43,122 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/namespace/b87290619f1c8530cf44d6a257730795 2023-05-29 12:58:43,123 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/namespace/b87290619f1c8530cf44d6a257730795 2023-05-29 12:58:43,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b87290619f1c8530cf44d6a257730795 2023-05-29 12:58:43,129 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/namespace/b87290619f1c8530cf44d6a257730795/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:58:43,129 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b87290619f1c8530cf44d6a257730795; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=737175, jitterRate=-0.0626339465379715}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:58:43,129 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b87290619f1c8530cf44d6a257730795: 2023-05-29 12:58:43,131 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795., pid=6, masterSystemTime=1685365123113 2023-05-29 12:58:43,133 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 2023-05-29 12:58:43,133 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 
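The namespace region opens with a SteppingSplitPolicy and FlushLargeStoresPolicy{flushSizeLowerBound=-1}, meaning no per-column-family flush lower bound was set in its table descriptor. For an ordinary table, both choices can be made explicit on the descriptor; a sketch with an invented table name, reusing the same property key the FlushLargeStoresPolicy message refers to.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class SplitAndFlushPolicySketch {
      public static TableDescriptor build() {
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example_table"))
            // Same split policy the opened regions report above.
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
            // Descriptor-level lower bound consulted by FlushLargeStoresPolicy (16 MB here).
            .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound", "16777216")
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
            .build();
      }
    }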
2023-05-29 12:58:43,134 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b87290619f1c8530cf44d6a257730795, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:43,134 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365123134"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685365123134"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685365123134"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685365123134"}]},"ts":"1685365123134"} 2023-05-29 12:58:43,138 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-29 12:58:43,138 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure b87290619f1c8530cf44d6a257730795, server=jenkins-hbase4.apache.org,37819,1685365122201 in 176 msec 2023-05-29 12:58:43,141 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-29 12:58:43,141 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b87290619f1c8530cf44d6a257730795, ASSIGN in 335 msec 2023-05-29 12:58:43,141 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 12:58:43,142 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365123142"}]},"ts":"1685365123142"} 2023-05-29 12:58:43,143 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-29 12:58:43,146 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 12:58:43,147 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 394 msec 2023-05-29 12:58:43,154 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-29 12:58:43,156 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:58:43,156 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:58:43,160 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-29 12:58:43,171 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): 
master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:58:43,175 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 14 msec 2023-05-29 12:58:43,182 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-29 12:58:43,189 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:58:43,195 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-05-29 12:58:43,206 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-29 12:58:43,209 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-29 12:58:43,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.987sec 2023-05-29 12:58:43,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-29 12:58:43,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-29 12:58:43,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-29 12:58:43,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43689,1685365122163-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-29 12:58:43,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43689,1685365122163-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
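The two CreateNamespaceProcedure records above (pid=7 and pid=8) create the built-in default and hbase namespaces during master initialization. User namespaces go through the same procedure when created via the Admin API; a short sketch with an invented namespace name.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Drives a CreateNamespaceProcedure on the master, like the pid=7/pid=8 entries above.
          admin.createNamespace(NamespaceDescriptor.create("example_ns").build());
        }
      }
    }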
2023-05-29 12:58:43,211 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-29 12:58:43,216 DEBUG [Listener at localhost/38477] zookeeper.ReadOnlyZKClient(139): Connect 0x286f9420 to 127.0.0.1:62871 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:58:43,220 DEBUG [Listener at localhost/38477] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53d8a823, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:58:43,221 DEBUG [hconnection-0x44571d57-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 12:58:43,223 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32998, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 12:58:43,224 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,43689,1685365122163 2023-05-29 12:58:43,225 INFO [Listener at localhost/38477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:58:43,227 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-29 12:58:43,227 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:58:43,228 INFO [Listener at localhost/38477] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-29 12:58:43,229 DEBUG [Listener at localhost/38477] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-29 12:58:43,232 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42472, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-29 12:58:43,233 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-29 12:58:43,233 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-29 12:58:43,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 12:58:43,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:58:43,236 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 12:58:43,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-05-29 12:58:43,237 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 12:58:43,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 12:58:43,239 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7 2023-05-29 12:58:43,239 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7 empty. 
2023-05-29 12:58:43,240 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7 2023-05-29 12:58:43,240 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-05-29 12:58:43,257 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-05-29 12:58:43,258 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => c6a3419d81c021718c7d354bf07917d7, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/.tmp 2023-05-29 12:58:43,269 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:58:43,269 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing c6a3419d81c021718c7d354bf07917d7, disabling compactions & flushes 2023-05-29 12:58:43,269 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:58:43,269 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:58:43,269 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. after waiting 0 ms 2023-05-29 12:58:43,269 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:58:43,269 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 
2023-05-29 12:58:43,269 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for c6a3419d81c021718c7d354bf07917d7: 2023-05-29 12:58:43,272 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 12:58:43,272 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685365123272"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685365123272"}]},"ts":"1685365123272"} 2023-05-29 12:58:43,274 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 12:58:43,275 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 12:58:43,276 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365123275"}]},"ts":"1685365123275"} 2023-05-29 12:58:43,277 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-05-29 12:58:43,282 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=c6a3419d81c021718c7d354bf07917d7, ASSIGN}] 2023-05-29 12:58:43,284 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=c6a3419d81c021718c7d354bf07917d7, ASSIGN 2023-05-29 12:58:43,284 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=c6a3419d81c021718c7d354bf07917d7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37819,1685365122201; forceNewPlan=false, retain=false 2023-05-29 12:58:43,435 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c6a3419d81c021718c7d354bf07917d7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:43,436 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685365123435"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685365123435"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685365123435"}]},"ts":"1685365123435"} 2023-05-29 12:58:43,438 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; 
OpenRegionProcedure c6a3419d81c021718c7d354bf07917d7, server=jenkins-hbase4.apache.org,37819,1685365122201}] 2023-05-29 12:58:43,594 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:58:43,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c6a3419d81c021718c7d354bf07917d7, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:58:43,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling c6a3419d81c021718c7d354bf07917d7 2023-05-29 12:58:43,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:58:43,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c6a3419d81c021718c7d354bf07917d7 2023-05-29 12:58:43,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c6a3419d81c021718c7d354bf07917d7 2023-05-29 12:58:43,596 INFO [StoreOpener-c6a3419d81c021718c7d354bf07917d7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c6a3419d81c021718c7d354bf07917d7 2023-05-29 12:58:43,597 DEBUG [StoreOpener-c6a3419d81c021718c7d354bf07917d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info 2023-05-29 12:58:43,597 DEBUG [StoreOpener-c6a3419d81c021718c7d354bf07917d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info 2023-05-29 12:58:43,598 INFO [StoreOpener-c6a3419d81c021718c7d354bf07917d7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c6a3419d81c021718c7d354bf07917d7 columnFamilyName info 2023-05-29 12:58:43,598 INFO [StoreOpener-c6a3419d81c021718c7d354bf07917d7-1] regionserver.HStore(310): Store=c6a3419d81c021718c7d354bf07917d7/info, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:58:43,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7 2023-05-29 12:58:43,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7 2023-05-29 12:58:43,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c6a3419d81c021718c7d354bf07917d7 2023-05-29 12:58:43,604 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:58:43,604 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c6a3419d81c021718c7d354bf07917d7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=835555, jitterRate=0.06246389448642731}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:58:43,604 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c6a3419d81c021718c7d354bf07917d7: 2023-05-29 12:58:43,605 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7., pid=11, masterSystemTime=1685365123590 2023-05-29 12:58:43,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:58:43,607 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 
2023-05-29 12:58:43,608 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c6a3419d81c021718c7d354bf07917d7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:43,608 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685365123608"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685365123608"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685365123608"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685365123608"}]},"ts":"1685365123608"} 2023-05-29 12:58:43,612 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-29 12:58:43,612 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure c6a3419d81c021718c7d354bf07917d7, server=jenkins-hbase4.apache.org,37819,1685365122201 in 172 msec 2023-05-29 12:58:43,614 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-29 12:58:43,614 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=c6a3419d81c021718c7d354bf07917d7, ASSIGN in 330 msec 2023-05-29 12:58:43,615 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 12:58:43,615 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365123615"}]},"ts":"1685365123615"} 2023-05-29 12:58:43,616 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-05-29 12:58:43,619 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 12:58:43,620 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 385 msec 2023-05-29 12:58:46,310 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-29 12:58:48,447 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-29 12:58:48,447 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-29 12:58:48,448 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 12:58:53,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.MasterRpcServices(1227): Checking to see if 
procedure is done pid=9 2023-05-29 12:58:53,239 INFO [Listener at localhost/38477] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-05-29 12:58:53,242 DEBUG [Listener at localhost/38477] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:58:53,242 DEBUG [Listener at localhost/38477] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:58:53,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-29 12:58:53,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-05-29 12:58:53,264 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-05-29 12:58:53,264 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 12:58:53,264 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-05-29 12:58:53,264 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 2023-05-29 12:58:53,265 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 12:58:53,265 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-29 12:58:53,266 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 12:58:53,266 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,266 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 12:58:53,266 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:58:53,266 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,267 DEBUG 
[(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-29 12:58:53,267 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-05-29 12:58:53,267 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 12:58:53,267 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-29 12:58:53,267 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-29 12:58:53,268 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-05-29 12:58:53,270 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-05-29 12:58:53,270 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-05-29 12:58:53,270 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 12:58:53,271 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-05-29 12:58:53,271 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-29 12:58:53,271 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-29 12:58:53,271 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 2023-05-29 12:58:53,272 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. started... 
2023-05-29 12:58:53,272 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing b87290619f1c8530cf44d6a257730795 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-29 12:58:53,285 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/namespace/b87290619f1c8530cf44d6a257730795/.tmp/info/cf7585304181481dbf95c3bc018e9001 2023-05-29 12:58:53,295 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/namespace/b87290619f1c8530cf44d6a257730795/.tmp/info/cf7585304181481dbf95c3bc018e9001 as hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/namespace/b87290619f1c8530cf44d6a257730795/info/cf7585304181481dbf95c3bc018e9001 2023-05-29 12:58:53,300 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/namespace/b87290619f1c8530cf44d6a257730795/info/cf7585304181481dbf95c3bc018e9001, entries=2, sequenceid=6, filesize=4.8 K 2023-05-29 12:58:53,301 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for b87290619f1c8530cf44d6a257730795 in 29ms, sequenceid=6, compaction requested=false 2023-05-29 12:58:53,301 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for b87290619f1c8530cf44d6a257730795: 2023-05-29 12:58:53,301 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 2023-05-29 12:58:53,301 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-29 12:58:53,302 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-05-29 12:58:53,302 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,302 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-05-29 12:58:53,302 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,37819,1685365122201' joining acquired barrier for procedure (hbase:namespace) in zk 2023-05-29 12:58:53,303 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,303 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 12:58:53,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:58:53,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:58:53,304 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 12:58:53,304 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-29 12:58:53,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:58:53,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:58:53,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-29 12:58:53,305 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,305 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:58:53,305 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,37819,1685365122201' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-05-29 12:58:53,306 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@5ca7edb1[Count = 0] remaining members to acquire global barrier 2023-05-29 12:58:53,306 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-05-29 12:58:53,306 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 12:58:53,307 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 12:58:53,307 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 12:58:53,307 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 12:58:53,307 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 2023-05-29 12:58:53,307 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-05-29 12:58:53,307 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,307 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase4.apache.org,37819,1685365122201' in zk 2023-05-29 12:58:53,307 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-29 12:58:53,310 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-05-29 12:58:53,310 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,310 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-29 12:58:53,310 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,311 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:58:53,311 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:58:53,310 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 
2023-05-29 12:58:53,311 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:58:53,311 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:58:53,312 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-29 12:58:53,312 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,312 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:58:53,312 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-29 12:58:53,313 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,313 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase4.apache.org,37819,1685365122201': 2023-05-29 12:58:53,313 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,37819,1685365122201' released barrier for procedure'hbase:namespace', counting down latch. Waiting for 0 more 2023-05-29 12:58:53,313 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-05-29 12:58:53,313 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-29 12:58:53,313 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-29 12:58:53,313 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-05-29 12:58:53,313 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-29 12:58:53,315 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 12:58:53,315 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 12:58:53,315 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 12:58:53,315 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 12:58:53,316 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:58:53,316 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:58:53,315 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, 
quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 12:58:53,315 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 12:58:53,316 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:58:53,316 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 12:58:53,316 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:58:53,316 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,316 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-29 12:58:53,316 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 12:58:53,317 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:58:53,317 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-29 12:58:53,317 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,317 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,317 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:58:53,318 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-29 12:58:53,318 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,324 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,324 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 12:58:53,324 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-29 12:58:53,324 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 12:58:53,324 DEBUG 
[Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-29 12:58:53,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-05-29 12:58:53,324 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 12:58:53,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-29 12:58:53,324 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:58:53,324 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-29 12:58:53,325 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 12:58:53,324 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:58:53,325 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 12:58:53,325 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 12:58:53,326 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 12:58:53,326 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:58:53,327 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry) 2023-05-29 12:58:53,327 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-29 12:59:03,327 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-29 12:59:03,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-29 12:59:03,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-29 12:59:03,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,346 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 12:59:03,346 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 12:59:03,347 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-29 12:59:03,347 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-29 12:59:03,347 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,347 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,350 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,350 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 12:59:03,350 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 12:59:03,350 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:59:03,350 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,350 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-29 12:59:03,350 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,350 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,351 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-29 12:59:03,351 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,351 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,351 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,351 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-29 12:59:03,351 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 12:59:03,352 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-29 12:59:03,352 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-29 12:59:03,352 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-29 12:59:03,352 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:59:03,352 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. started... 
2023-05-29 12:59:03,352 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing c6a3419d81c021718c7d354bf07917d7 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-29 12:59:03,372 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/.tmp/info/059ead797201490b8a31eaadf9a1270a 2023-05-29 12:59:03,378 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/.tmp/info/059ead797201490b8a31eaadf9a1270a as hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/059ead797201490b8a31eaadf9a1270a 2023-05-29 12:59:03,386 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/059ead797201490b8a31eaadf9a1270a, entries=1, sequenceid=5, filesize=5.8 K 2023-05-29 12:59:03,387 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for c6a3419d81c021718c7d354bf07917d7 in 35ms, sequenceid=5, compaction requested=false 2023-05-29 12:59:03,388 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for c6a3419d81c021718c7d354bf07917d7: 2023-05-29 12:59:03,388 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:59:03,388 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-29 12:59:03,388 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-05-29 12:59:03,388 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,388 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-29 12:59:03,388 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,37819,1685365122201' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-29 12:59:03,390 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,390 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,390 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,390 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:59:03,390 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:59:03,390 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,390 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-29 12:59:03,390 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:59:03,391 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:59:03,391 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,391 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,391 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:59:03,392 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,37819,1685365122201' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-29 12:59:03,392 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@589862fd[Count = 0] remaining members to acquire 
global barrier 2023-05-29 12:59:03,392 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-29 12:59:03,392 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,393 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,393 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,393 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,393 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-29 12:59:03,393 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-29 12:59:03,393 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,37819,1685365122201' in zk 2023-05-29 12:59:03,393 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,393 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-29 12:59:03,395 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-29 12:59:03,395 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,395 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-29 12:59:03,395 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,395 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:59:03,395 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:59:03,395 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-29 12:59:03,396 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:59:03,396 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:59:03,396 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,397 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,397 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:59:03,397 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,397 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,398 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,37819,1685365122201': 2023-05-29 12:59:03,398 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,37819,1685365122201' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-29 12:59:03,398 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-29 12:59:03,398 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-29 12:59:03,398 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-29 12:59:03,398 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,398 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-29 12:59:03,403 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,403 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,403 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,403 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:59:03,403 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:59:03,403 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,403 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,403 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 12:59:03,404 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,404 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:59:03,404 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 12:59:03,404 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:59:03,404 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,404 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,404 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:59:03,405 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,405 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,405 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,406 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:59:03,406 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,406 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,409 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,409 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 12:59:03,409 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,409 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 12:59:03,409 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:59:03,409 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,409 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-29 12:59:03,409 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-29 12:59:03,409 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 12:59:03,409 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:03,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 12:59:03,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-29 12:59:03,410 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,410 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 12:59:03,410 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:59:03,410 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,410 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:03,410 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-29 12:59:03,410 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-29 12:59:13,410 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-29 12:59:13,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-29 12:59:13,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-29 12:59:13,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-29 12:59:13,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,421 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 12:59:13,421 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 12:59:13,421 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-29 12:59:13,421 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-29 12:59:13,422 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,422 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,427 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,427 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 12:59:13,428 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 12:59:13,428 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:59:13,428 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,428 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-29 12:59:13,428 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,428 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,428 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-29 12:59:13,429 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,429 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,429 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-29 12:59:13,429 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,429 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-29 12:59:13,429 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 12:59:13,429 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-29 12:59:13,429 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-29 12:59:13,429 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-29 12:59:13,430 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:59:13,430 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. started... 
2023-05-29 12:59:13,430 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing c6a3419d81c021718c7d354bf07917d7 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-29 12:59:13,439 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/.tmp/info/cb00f6987f2348e0b7d4c430ac450a76 2023-05-29 12:59:13,445 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/.tmp/info/cb00f6987f2348e0b7d4c430ac450a76 as hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/cb00f6987f2348e0b7d4c430ac450a76 2023-05-29 12:59:13,452 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/cb00f6987f2348e0b7d4c430ac450a76, entries=1, sequenceid=9, filesize=5.8 K 2023-05-29 12:59:13,453 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for c6a3419d81c021718c7d354bf07917d7 in 23ms, sequenceid=9, compaction requested=false 2023-05-29 12:59:13,453 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for c6a3419d81c021718c7d354bf07917d7: 2023-05-29 12:59:13,453 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:59:13,453 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-29 12:59:13,453 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-05-29 12:59:13,453 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,453 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-29 12:59:13,453 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,37819,1685365122201' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-29 12:59:13,455 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,455 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,455 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,455 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:59:13,455 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:59:13,455 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,455 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-29 12:59:13,456 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:59:13,456 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:59:13,456 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,456 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,457 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:59:13,457 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,37819,1685365122201' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-29 12:59:13,457 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@1d9ae78a[Count = 0] remaining members to acquire 
global barrier 2023-05-29 12:59:13,457 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-29 12:59:13,457 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,458 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,458 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,458 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,458 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-29 12:59:13,458 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,458 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-29 12:59:13,458 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-29 12:59:13,459 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,37819,1685365122201' in zk 2023-05-29 12:59:13,461 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,461 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-29 12:59:13,461 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,461 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] 
errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-29 12:59:13,461 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:59:13,461 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:59:13,461 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-29 12:59:13,461 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:59:13,462 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:59:13,462 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,462 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,462 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:59:13,463 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,463 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,463 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,37819,1685365122201': 2023-05-29 12:59:13,463 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,37819,1685365122201' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-29 12:59:13,463 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-29 12:59:13,463 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-29 12:59:13,463 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-29 12:59:13,463 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,463 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-29 12:59:13,466 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,466 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,466 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,466 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,466 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:59:13,466 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:59:13,466 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 12:59:13,466 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,466 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:59:13,466 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 12:59:13,466 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:59:13,466 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,467 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,467 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,467 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:59:13,467 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,468 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,468 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,468 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:59:13,468 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,469 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,471 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,471 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,471 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 12:59:13,471 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,471 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 12:59:13,471 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:59:13,471 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 12:59:13,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 
12:59:13,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-29 12:59:13,471 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-29 12:59:13,471 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:13,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-29 12:59:13,471 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,471 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,471 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:13,471 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-29 12:59:13,472 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 12:59:13,472 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:59:13,472 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-29 12:59:23,472 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-29 12:59:23,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-29 12:59:23,486 INFO [Listener at localhost/38477] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365122580 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365163476 2023-05-29 12:59:23,486 DEBUG [Listener at localhost/38477] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37213,DS-78539aa6-d56f-45b5-9094-5631d79a77a8,DISK], DatanodeInfoWithStorage[127.0.0.1:40853,DS-498378ef-d9ac-4a51-9b10-c9ed3e060ea8,DISK]] 2023-05-29 12:59:23,487 DEBUG [Listener at localhost/38477] wal.AbstractFSWAL(716): hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365122580 is not closed yet, will try archiving it next time 2023-05-29 12:59:23,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-29 12:59:23,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-29 12:59:23,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,505 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 12:59:23,505 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 12:59:23,506 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-29 12:59:23,506 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-29 12:59:23,507 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,507 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,508 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 12:59:23,508 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,508 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 12:59:23,508 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:59:23,509 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,509 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-29 12:59:23,509 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,509 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,510 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-29 12:59:23,510 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,510 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,510 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-29 12:59:23,510 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,510 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-29 12:59:23,510 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 12:59:23,511 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-29 12:59:23,511 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-29 12:59:23,511 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-29 12:59:23,511 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:59:23,511 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. started... 2023-05-29 12:59:23,511 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing c6a3419d81c021718c7d354bf07917d7 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-29 12:59:23,527 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/.tmp/info/02b037a0295e4cc791b8fb3b978f7a42 2023-05-29 12:59:23,534 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/.tmp/info/02b037a0295e4cc791b8fb3b978f7a42 as hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/02b037a0295e4cc791b8fb3b978f7a42 2023-05-29 12:59:23,540 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/02b037a0295e4cc791b8fb3b978f7a42, entries=1, sequenceid=13, filesize=5.8 K 2023-05-29 12:59:23,541 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for c6a3419d81c021718c7d354bf07917d7 in 30ms, sequenceid=13, compaction requested=true 2023-05-29 12:59:23,541 DEBUG 
[rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for c6a3419d81c021718c7d354bf07917d7: 2023-05-29 12:59:23,541 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:59:23,542 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-29 12:59:23,542 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-29 12:59:23,542 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,542 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-29 12:59:23,542 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,37819,1685365122201' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-29 12:59:23,546 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,546 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,546 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,546 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:59:23,546 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:59:23,546 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,546 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-29 12:59:23,546 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:59:23,547 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:59:23,547 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,547 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,547 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:59:23,548 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,37819,1685365122201' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-29 12:59:23,548 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@4f64a9d5[Count = 0] remaining members to acquire global barrier 2023-05-29 12:59:23,548 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-29 12:59:23,548 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,549 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,549 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,549 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,549 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
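The entries above trace the ZooKeeper-based barrier used by flush-table-proc: the coordinator creates an acquire znode, the member joins it, and both sides set watchers on znodes that do not exist yet (the reached and abort nodes) so they are notified the instant the other side creates them. A minimal sketch of that watch-then-create pattern with the plain ZooKeeper client follows; the connect string, path and timeout are placeholders, and this only illustrates the pattern, not HBase's ZKProcedureCoordinator/ZKProcedureMemberRpcs code.

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class BarrierWatchSketch {
      public static void main(String[] args) throws Exception {
        // Placeholder quorum; a real run would point at the cluster's ZooKeeper ensemble.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> { });
        String reachedBarrier = "/demo-reached-barrier";   // placeholder, single level so no parents are needed

        CountDownLatch barrier = new CountDownLatch(1);
        // Setting a watch on a znode that does not yet exist: exists() returns null,
        // but the watcher still fires once the node is created later.
        zk.exists(reachedBarrier, event -> {
          if (event.getType() == Watcher.Event.EventType.NodeCreated) {
            barrier.countDown();
          }
        });

        // The "coordinator" side of this sketch creates the barrier node. Ephemeral here so
        // repeated runs start clean; HBase's procedure znodes are cleared explicitly instead.
        zk.create(reachedBarrier, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        barrier.await();   // the "member" proceeds once the barrier node appears
        zk.close();
      }
    }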
2023-05-29 12:59:23,549 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-29 12:59:23,549 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,549 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-29 12:59:23,549 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,37819,1685365122201' in zk 2023-05-29 12:59:23,551 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-29 12:59:23,551 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,551 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-29 12:59:23,551 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,552 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:59:23,552 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:59:23,551 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-05-29 12:59:23,552 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:59:23,552 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:59:23,553 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,553 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,554 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:59:23,554 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,555 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,555 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,37819,1685365122201': 2023-05-29 12:59:23,555 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,37819,1685365122201' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-29 12:59:23,555 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-29 12:59:23,555 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-29 12:59:23,555 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-29 12:59:23,555 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,555 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-29 12:59:23,557 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,557 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,557 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,557 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,557 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:59:23,557 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:59:23,557 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 12:59:23,557 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,558 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:59:23,558 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 12:59:23,558 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:59:23,558 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,558 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,558 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:59:23,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,566 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,566 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,566 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:59:23,567 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,567 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,571 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,571 DEBUG [Listener at 
localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,571 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 12:59:23,571 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 12:59:23,571 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,571 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 12:59:23,571 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-29 12:59:23,571 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:23,571 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-29 12:59:23,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 12:59:23,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
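The master has just reported the flush-table procedure successful; in the entries that follow, the client side (HBaseAdmin) waits up to 300000 ms and polls the master roughly every 10 s until the procedure is reported done. A hedged sketch of driving the same flush-table-proc through the public Admin API is below; the configuration is whatever hbase-site.xml is on the classpath, the instance name is the table name seen in this log, and the ten-second sleep simply mirrors the retry interval visible here rather than any required value.

    import java.util.HashMap;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushTableProcSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();   // picks up hbase-site.xml
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          String signature = "flush-table-proc";             // procedure family seen in the log
          String instance = "TestLogRolling-testCompactionRecordDoesntBlockRolling";
          // Kick off the globally coordinated flush, then poll until the master reports it done.
          admin.execProcedure(signature, instance, new HashMap<>());
          while (!admin.isProcedureFinished(signature, instance, new HashMap<>())) {
            Thread.sleep(10_000L);                            // the log shows ~10 s between status checks
          }
        }
      }
    }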
2023-05-29 12:59:23,571 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,571 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:59:23,572 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,572 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:23,572 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-29 12:59:23,573 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-29 12:59:23,573 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 12:59:23,573 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:59:33,573 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-29 12:59:33,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-29 12:59:33,574 DEBUG [Listener at localhost/38477] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 12:59:33,579 DEBUG [Listener at localhost/38477] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 12:59:33,579 DEBUG [Listener at localhost/38477] regionserver.HStore(1912): c6a3419d81c021718c7d354bf07917d7/info is initiating minor compaction (all files) 2023-05-29 12:59:33,579 INFO [Listener at localhost/38477] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 12:59:33,579 INFO [Listener at localhost/38477] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 12:59:33,579 INFO [Listener at localhost/38477] regionserver.HRegion(2259): Starting compaction of c6a3419d81c021718c7d354bf07917d7/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 
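The compaction-selection entries just below show ExploringCompactionPolicy picking all 3 eligible store files (total size 17769 bytes, i.e. three ~5.9 KB flush files) with "1 in ratio". A simplified sketch of the size-ratio test behind that phrase follows; it is not the actual policy code, and the 1.2 value is only the usual default of hbase.hstore.compaction.ratio, assumed here rather than read from this run's configuration.

    import java.util.Arrays;
    import java.util.List;

    public class RatioCheckSketch {
      // Simplified "in ratio" test: every file in a proposed selection must be no
      // larger than ratio * (combined size of the other files in the selection).
      static boolean allFilesInRatio(List<Long> sizes, double ratio) {
        long total = sizes.stream().mapToLong(Long::longValue).sum();
        for (long size : sizes) {
          if (size > ratio * (total - size)) {
            return false;
          }
        }
        return true;
      }

      public static void main(String[] args) {
        // Three 5923-byte flush files like the ones in the log (total 17769 bytes).
        List<Long> sizes = Arrays.asList(5923L, 5923L, 5923L);
        double ratio = 1.2;   // usual default for hbase.hstore.compaction.ratio
        // 5923 <= 1.2 * (17769 - 5923) = 14215.2, so the whole set is selectable.
        System.out.println(allFilesInRatio(sizes, ratio));   // prints true
      }
    }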
2023-05-29 12:59:33,579 INFO [Listener at localhost/38477] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/059ead797201490b8a31eaadf9a1270a, hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/cb00f6987f2348e0b7d4c430ac450a76, hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/02b037a0295e4cc791b8fb3b978f7a42] into tmpdir=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/.tmp, totalSize=17.4 K 2023-05-29 12:59:33,580 DEBUG [Listener at localhost/38477] compactions.Compactor(207): Compacting 059ead797201490b8a31eaadf9a1270a, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1685365143338 2023-05-29 12:59:33,580 DEBUG [Listener at localhost/38477] compactions.Compactor(207): Compacting cb00f6987f2348e0b7d4c430ac450a76, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1685365153412 2023-05-29 12:59:33,581 DEBUG [Listener at localhost/38477] compactions.Compactor(207): Compacting 02b037a0295e4cc791b8fb3b978f7a42, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1685365163474 2023-05-29 12:59:33,592 INFO [Listener at localhost/38477] throttle.PressureAwareThroughputController(145): c6a3419d81c021718c7d354bf07917d7#info#compaction#19 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 12:59:33,607 DEBUG [Listener at localhost/38477] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/.tmp/info/71fb7a1671954326b902865da07db96f as hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/71fb7a1671954326b902865da07db96f 2023-05-29 12:59:33,613 INFO [Listener at localhost/38477] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in c6a3419d81c021718c7d354bf07917d7/info of c6a3419d81c021718c7d354bf07917d7 into 71fb7a1671954326b902865da07db96f(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
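Above, the three small HFiles are rewritten into a single 8.0 K file and committed from the .tmp directory into the info store. The compaction here is driven directly by the test thread; from an ordinary client a comparable request could be made through the Admin API, roughly as sketched below (the table name is the one in the log, and the polling loop is purely illustrative, since getCompactionState only reports whether a compaction is currently running).

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.CompactionState;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CompactSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.compact(table);   // request a (minor) compaction of every store in the table
          // Wait until the region servers report no compaction in progress any more.
          while (admin.getCompactionState(table) != CompactionState.NONE) {
            Thread.sleep(1_000L);
          }
        }
      }
    }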
2023-05-29 12:59:33,613 DEBUG [Listener at localhost/38477] regionserver.HRegion(2289): Compaction status journal for c6a3419d81c021718c7d354bf07917d7: 2023-05-29 12:59:33,625 INFO [Listener at localhost/38477] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365163476 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365173615 2023-05-29 12:59:33,625 DEBUG [Listener at localhost/38477] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37213,DS-78539aa6-d56f-45b5-9094-5631d79a77a8,DISK], DatanodeInfoWithStorage[127.0.0.1:40853,DS-498378ef-d9ac-4a51-9b10-c9ed3e060ea8,DISK]] 2023-05-29 12:59:33,625 DEBUG [Listener at localhost/38477] wal.AbstractFSWAL(716): hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365163476 is not closed yet, will try archiving it next time 2023-05-29 12:59:33,625 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365122580 to hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/oldWALs/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365122580 2023-05-29 12:59:33,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-29 12:59:33,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-29 12:59:33,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,632 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 12:59:33,632 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 12:59:33,633 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-29 12:59:33,633 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
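The roll recorded just below closes the 2.45 KB WAL, opens a new one on a fresh pipeline, and archives the oldest WAL to oldWALs. A roll of a specific region server's WAL can also be requested through the Admin API; a hedged sketch follows, using the host,port,startcode server-name form that appears throughout this log (substitute a live server in a real run).

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RollWalSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // host,port,startcode taken from this log; a real run needs a currently live server.
          ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org,37819,1685365122201");
          admin.rollWALWriter(rs);   // asks that region server to close its current WAL and open a new one
        }
      }
    }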
2023-05-29 12:59:33,633 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,633 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,639 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 12:59:33,639 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,639 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 12:59:33,639 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:59:33,639 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,639 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-29 12:59:33,639 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,640 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-29 12:59:33,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,640 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,640 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-29 12:59:33,640 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,640 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-29 12:59:33,640 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 12:59:33,641 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-29 12:59:33,641 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-29 12:59:33,641 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-29 12:59:33,641 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:59:33,641 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. started... 2023-05-29 12:59:33,641 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing c6a3419d81c021718c7d354bf07917d7 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-29 12:59:33,689 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/.tmp/info/4690184974a640f991cb360288b58547 2023-05-29 12:59:33,695 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/.tmp/info/4690184974a640f991cb360288b58547 as hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/4690184974a640f991cb360288b58547 2023-05-29 12:59:33,700 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/4690184974a640f991cb360288b58547, entries=1, sequenceid=18, filesize=5.8 K 2023-05-29 12:59:33,701 INFO [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for c6a3419d81c021718c7d354bf07917d7 in 60ms, sequenceid=18, compaction requested=false 2023-05-29 12:59:33,701 DEBUG 
[rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for c6a3419d81c021718c7d354bf07917d7: 2023-05-29 12:59:33,701 DEBUG [rs(jenkins-hbase4.apache.org,37819,1685365122201)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:59:33,701 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-29 12:59:33,701 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-29 12:59:33,701 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,701 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-29 12:59:33,701 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,37819,1685365122201' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-29 12:59:33,703 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,703 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,703 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,703 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:59:33,703 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:59:33,704 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,704 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-29 12:59:33,704 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:59:33,704 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:59:33,704 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,705 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,705 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:59:33,705 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,37819,1685365122201' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-29 12:59:33,705 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@2ccd1fed[Count = 0] remaining members to acquire global barrier 2023-05-29 12:59:33,705 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-29 12:59:33,705 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,707 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,707 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,707 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,707 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-05-29 12:59:33,707 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-29 12:59:33,707 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,37819,1685365122201' in zk 2023-05-29 12:59:33,707 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,707 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-29 12:59:33,709 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-29 12:59:33,709 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,709 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-29 12:59:33,709 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,709 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:59:33,709 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:59:33,709 DEBUG [member: 'jenkins-hbase4.apache.org,37819,1685365122201' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
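Each flush in this log covers a single edit of about 1 KB (dataSize=1.05 KB, entries=1), so the test is evidently writing one small row into the info family and then forcing a flush, once per cycle. A rough sketch of such a write-then-flush loop is below; the family name comes from the log, but the row keys, qualifier and 1 KB value are placeholders, not the values the test actually uses.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WriteThenFlushSketch {
      public static void main(String[] args) throws Exception {
        TableName name = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin();
             Table table = conn.getTable(name)) {
          for (int i = 0; i < 3; i++) {
            // One ~1 KB value per cycle, roughly matching the flushed dataSize in the log.
            Put put = new Put(Bytes.toBytes("row-" + i))
                .addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), new byte[1024]);
            table.put(put);
            admin.flush(name);   // each flush produces one small HFile, like the sequenceid 13/18 flushes
          }
        }
      }
    }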
2023-05-29 12:59:33,710 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:59:33,710 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:59:33,710 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,710 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,711 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:59:33,711 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,711 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,712 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,37819,1685365122201': 2023-05-29 12:59:33,712 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,37819,1685365122201' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-29 12:59:33,712 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-29 12:59:33,712 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-29 12:59:33,712 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-29 12:59:33,712 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,712 INFO [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-29 12:59:33,714 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,714 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,714 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,714 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, 
quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 12:59:33,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 12:59:33,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 12:59:33,715 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,715 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 12:59:33,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:59:33,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 12:59:33,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,716 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,716 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 12:59:33,716 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,717 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,717 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,717 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 12:59:33,718 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,719 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,720 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,720 DEBUG [Listener at localhost/38477-EventThread] 
zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,720 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,720 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 12:59:33,720 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:33,720 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 12:59:33,720 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 12:59:33,721 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:59:33,720 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,721 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,721 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 12:59:33,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 12:59:33,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-29 12:59:33,721 DEBUG [(jenkins-hbase4.apache.org,43689,1685365122163)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-29 12:59:33,721 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-29 12:59:33,722 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-29 12:59:33,722 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 12:59:33,722 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-29 12:59:33,722 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:59:43,722 DEBUG [Listener at localhost/38477] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-29 12:59:43,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43689] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-29 12:59:43,735 INFO [Listener at localhost/38477] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365173615 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365183726 2023-05-29 12:59:43,735 DEBUG [Listener at localhost/38477] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40853,DS-498378ef-d9ac-4a51-9b10-c9ed3e060ea8,DISK], DatanodeInfoWithStorage[127.0.0.1:37213,DS-78539aa6-d56f-45b5-9094-5631d79a77a8,DISK]] 2023-05-29 12:59:43,735 DEBUG [Listener at localhost/38477] wal.AbstractFSWAL(716): hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365173615 is not closed yet, will try archiving it next time 2023-05-29 12:59:43,735 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-29 12:59:43,735 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365163476 to hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/oldWALs/jenkins-hbase4.apache.org%2C37819%2C1685365122201.1685365163476 2023-05-29 12:59:43,735 INFO [Listener at localhost/38477] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-29 12:59:43,735 DEBUG [Listener at localhost/38477] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x286f9420 to 127.0.0.1:62871 2023-05-29 12:59:43,735 DEBUG [Listener at localhost/38477] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:59:43,736 DEBUG [Listener at localhost/38477] util.JVMClusterUtil(237): Shutting down HBase Cluster 
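From here the test tears everything down: the minicluster is shut down, the master protocol and ZooKeeper connections are closed, and JVMClusterUtil stops the HBase cluster. For context, a minimal sketch of the start/stop lifecycle such a test is built around is below; it shows only the HBaseTestingUtility calls with a placeholder table, and none of TestLogRolling's actual setup or assertions.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MiniClusterLifecycleSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster();   // spins up ZooKeeper, HDFS, a master and a region server
        try {
          Table table = util.createTable(TableName.valueOf("demo"), Bytes.toBytes("info"));
          table.close();
          // ... exercise flushes, compactions and WAL rolls against the mini cluster ...
        } finally {
          util.shutdownMiniCluster();   // the step this part of the log is recording
        }
      }
    }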
2023-05-29 12:59:43,736 DEBUG [Listener at localhost/38477] util.JVMClusterUtil(257): Found active master hash=943455870, stopped=false 2023-05-29 12:59:43,736 INFO [Listener at localhost/38477] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,43689,1685365122163 2023-05-29 12:59:43,740 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 12:59:43,740 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 12:59:43,740 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:59:43,740 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:59:43,740 INFO [Listener at localhost/38477] procedure2.ProcedureExecutor(629): Stopping 2023-05-29 12:59:43,740 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:59:43,740 DEBUG [Listener at localhost/38477] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0288de11 to 127.0.0.1:62871 2023-05-29 12:59:43,740 DEBUG [Listener at localhost/38477] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:59:43,741 INFO [Listener at localhost/38477] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,37819,1685365122201' ***** 2023-05-29 12:59:43,741 INFO [Listener at localhost/38477] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 12:59:43,741 INFO [RS:0;jenkins-hbase4:37819] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 12:59:43,742 INFO [RS:0;jenkins-hbase4:37819] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-29 12:59:43,742 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 12:59:43,742 INFO [RS:0;jenkins-hbase4:37819] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-29 12:59:43,742 INFO [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(3303): Received CLOSE for b87290619f1c8530cf44d6a257730795 2023-05-29 12:59:43,743 INFO [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(3303): Received CLOSE for c6a3419d81c021718c7d354bf07917d7 2023-05-29 12:59:43,743 INFO [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:43,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b87290619f1c8530cf44d6a257730795, disabling compactions & flushes 2023-05-29 12:59:43,743 DEBUG [RS:0;jenkins-hbase4:37819] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x66de51b5 to 127.0.0.1:62871 2023-05-29 12:59:43,743 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 2023-05-29 12:59:43,743 DEBUG [RS:0;jenkins-hbase4:37819] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:59:43,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 2023-05-29 12:59:43,743 INFO [RS:0;jenkins-hbase4:37819] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 12:59:43,743 INFO [RS:0;jenkins-hbase4:37819] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 12:59:43,743 INFO [RS:0;jenkins-hbase4:37819] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-29 12:59:43,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. after waiting 0 ms 2023-05-29 12:59:43,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 
2023-05-29 12:59:43,743 INFO [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 12:59:43,746 INFO [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-29 12:59:43,746 DEBUG [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(1478): Online Regions={b87290619f1c8530cf44d6a257730795=hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795., c6a3419d81c021718c7d354bf07917d7=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7., 1588230740=hbase:meta,,1.1588230740} 2023-05-29 12:59:43,746 DEBUG [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(1504): Waiting on 1588230740, b87290619f1c8530cf44d6a257730795, c6a3419d81c021718c7d354bf07917d7 2023-05-29 12:59:43,747 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 12:59:43,747 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 12:59:43,747 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 12:59:43,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 12:59:43,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 12:59:43,748 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-05-29 12:59:43,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/namespace/b87290619f1c8530cf44d6a257730795/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-29 12:59:43,755 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 2023-05-29 12:59:43,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b87290619f1c8530cf44d6a257730795: 2023-05-29 12:59:43,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685365122752.b87290619f1c8530cf44d6a257730795. 2023-05-29 12:59:43,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c6a3419d81c021718c7d354bf07917d7, disabling compactions & flushes 2023-05-29 12:59:43,755 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:59:43,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:59:43,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 
after waiting 0 ms 2023-05-29 12:59:43,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:59:43,756 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c6a3419d81c021718c7d354bf07917d7 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-29 12:59:43,760 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.84 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/.tmp/info/731f7a3b2725410eae848b87d492a4d9 2023-05-29 12:59:43,765 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/.tmp/info/8d080d217bea4c9480b5ce383dbec78e 2023-05-29 12:59:43,771 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/.tmp/info/8d080d217bea4c9480b5ce383dbec78e as hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/8d080d217bea4c9480b5ce383dbec78e 2023-05-29 12:59:43,780 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/.tmp/table/f7fbacb56b2e4463a49504fa3e3fc15e 2023-05-29 12:59:43,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/8d080d217bea4c9480b5ce383dbec78e, entries=1, sequenceid=22, filesize=5.8 K 2023-05-29 12:59:43,786 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/.tmp/info/731f7a3b2725410eae848b87d492a4d9 as hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/info/731f7a3b2725410eae848b87d492a4d9 2023-05-29 12:59:43,786 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for c6a3419d81c021718c7d354bf07917d7 in 30ms, sequenceid=22, compaction requested=true 2023-05-29 12:59:43,789 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7.-1] regionserver.HStore(2712): Moving the files 
[hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/059ead797201490b8a31eaadf9a1270a, hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/cb00f6987f2348e0b7d4c430ac450a76, hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/02b037a0295e4cc791b8fb3b978f7a42] to archive 2023-05-29 12:59:43,790 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-29 12:59:43,792 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/059ead797201490b8a31eaadf9a1270a to hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/059ead797201490b8a31eaadf9a1270a 2023-05-29 12:59:43,793 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/info/731f7a3b2725410eae848b87d492a4d9, entries=20, sequenceid=14, filesize=7.6 K 2023-05-29 12:59:43,794 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/cb00f6987f2348e0b7d4c430ac450a76 to hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/cb00f6987f2348e0b7d4c430ac450a76 2023-05-29 12:59:43,794 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/.tmp/table/f7fbacb56b2e4463a49504fa3e3fc15e as hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/table/f7fbacb56b2e4463a49504fa3e3fc15e 2023-05-29 12:59:43,795 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/02b037a0295e4cc791b8fb3b978f7a42 to 
hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/info/02b037a0295e4cc791b8fb3b978f7a42 2023-05-29 12:59:43,804 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/table/f7fbacb56b2e4463a49504fa3e3fc15e, entries=4, sequenceid=14, filesize=4.9 K 2023-05-29 12:59:43,805 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3174, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 57ms, sequenceid=14, compaction requested=false 2023-05-29 12:59:43,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/c6a3419d81c021718c7d354bf07917d7/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-05-29 12:59:43,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:59:43,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c6a3419d81c021718c7d354bf07917d7: 2023-05-29 12:59:43,810 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685365123233.c6a3419d81c021718c7d354bf07917d7. 2023-05-29 12:59:43,811 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-29 12:59:43,812 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-29 12:59:43,812 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 12:59:43,812 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 12:59:43,812 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-29 12:59:43,946 INFO [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37819,1685365122201; all regions closed. 
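Closing region c6a3419d81c021718c7d354bf07917d7 flushed its memstore to a new store file and moved the three compacted files into the archive, following the archive/data/&lt;namespace&gt;/&lt;table&gt;/&lt;region&gt;/&lt;family&gt; layout visible in the HFileArchiver lines. A small sketch that lists such an archive directory with the plain Hadoop FileSystem API; the NameNode port and paths are the ephemeral values from this run and are illustrative only.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Lists the archived store files for one column family, using the archive
// layout shown in the HFileArchiver entries above.
public class ListArchivedStoreFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://localhost:38947"); // mini-DFS NameNode from the log
    Path family = new Path("/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/"
        + "archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/"
        + "c6a3419d81c021718c7d354bf07917d7/info");
    try (FileSystem fs = FileSystem.get(conf)) {
      for (FileStatus f : fs.listStatus(family)) {
        System.out.println(f.getPath().getName() + " " + f.getLen() + " bytes");
      }
    }
  }
}
```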
2023-05-29 12:59:43,946 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:43,953 DEBUG [RS:0;jenkins-hbase4:37819] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/oldWALs 2023-05-29 12:59:43,953 INFO [RS:0;jenkins-hbase4:37819] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C37819%2C1685365122201.meta:.meta(num 1685365122705) 2023-05-29 12:59:43,953 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/WALs/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:43,958 DEBUG [RS:0;jenkins-hbase4:37819] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/oldWALs 2023-05-29 12:59:43,958 INFO [RS:0;jenkins-hbase4:37819] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C37819%2C1685365122201:(num 1685365183726) 2023-05-29 12:59:43,958 DEBUG [RS:0;jenkins-hbase4:37819] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:59:43,958 INFO [RS:0;jenkins-hbase4:37819] regionserver.LeaseManager(133): Closed leases 2023-05-29 12:59:43,958 INFO [RS:0;jenkins-hbase4:37819] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-29 12:59:43,958 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-29 12:59:43,959 INFO [RS:0;jenkins-hbase4:37819] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37819 2023-05-29 12:59:43,962 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:59:43,962 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37819,1685365122201 2023-05-29 12:59:43,962 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:59:43,962 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37819,1685365122201] 2023-05-29 12:59:43,962 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37819,1685365122201; numProcessing=1 2023-05-29 12:59:43,965 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37819,1685365122201 already deleted, retry=false 2023-05-29 12:59:43,965 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37819,1685365122201 expired; onlineServers=0 2023-05-29 12:59:43,965 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,43689,1685365122163' ***** 2023-05-29 12:59:43,965 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-29 12:59:43,966 DEBUG [M:0;jenkins-hbase4:43689] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c047da1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 12:59:43,966 INFO [M:0;jenkins-hbase4:43689] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43689,1685365122163 2023-05-29 12:59:43,966 INFO [M:0;jenkins-hbase4:43689] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43689,1685365122163; all regions closed. 2023-05-29 12:59:43,966 DEBUG [M:0;jenkins-hbase4:43689] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 12:59:43,966 DEBUG [M:0;jenkins-hbase4:43689] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-29 12:59:43,966 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
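With the region server stopped, its ephemeral znode under /hbase/rs disappears and RegionServerTracker on the master processes the expiration, leaving onlineServers=0. While a cluster is still up, the same membership information is visible to a client through ClusterMetrics; a hedged sketch follows (the ZooKeeper client port is the ephemeral value from this run).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Reads the master's view of live region servers, which shrinks to zero as the
// ephemeral /hbase/rs znodes are deleted during the shutdown traced above.
public class LiveServerCount {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.set("hbase.zookeeper.property.clientPort", "62871"); // test ensemble port from the log
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        System.out.println("live region server: " + sn);
      }
    }
  }
}
```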
2023-05-29 12:59:43,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365122342] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365122342,5,FailOnTimeoutGroup] 2023-05-29 12:59:43,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365122343] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365122343,5,FailOnTimeoutGroup] 2023-05-29 12:59:43,966 DEBUG [M:0;jenkins-hbase4:43689] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-29 12:59:43,967 INFO [M:0;jenkins-hbase4:43689] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-29 12:59:43,967 INFO [M:0;jenkins-hbase4:43689] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-29 12:59:43,967 INFO [M:0;jenkins-hbase4:43689] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-29 12:59:43,968 DEBUG [M:0;jenkins-hbase4:43689] master.HMaster(1512): Stopping service threads 2023-05-29 12:59:43,968 INFO [M:0;jenkins-hbase4:43689] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-29 12:59:43,968 ERROR [M:0;jenkins-hbase4:43689] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-29 12:59:43,968 INFO [M:0;jenkins-hbase4:43689] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-29 12:59:43,968 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-29 12:59:43,968 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-29 12:59:43,968 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:59:43,968 DEBUG [M:0;jenkins-hbase4:43689] zookeeper.ZKUtil(398): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-29 12:59:43,968 WARN [M:0;jenkins-hbase4:43689] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-29 12:59:43,968 INFO [M:0;jenkins-hbase4:43689] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-29 12:59:43,969 INFO [M:0;jenkins-hbase4:43689] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-29 12:59:43,969 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:59:43,969 DEBUG [M:0;jenkins-hbase4:43689] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 12:59:43,969 INFO [M:0;jenkins-hbase4:43689] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:59:43,969 DEBUG [M:0;jenkins-hbase4:43689] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:59:43,969 DEBUG [M:0;jenkins-hbase4:43689] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 12:59:43,969 DEBUG [M:0;jenkins-hbase4:43689] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-29 12:59:43,969 INFO [M:0;jenkins-hbase4:43689] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.89 KB heapSize=47.33 KB 2023-05-29 12:59:43,979 INFO [M:0;jenkins-hbase4:43689] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.89 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4f06f6ebcace430c9d0d7a2c920da943 2023-05-29 12:59:43,984 INFO [M:0;jenkins-hbase4:43689] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4f06f6ebcace430c9d0d7a2c920da943 2023-05-29 12:59:43,985 DEBUG [M:0;jenkins-hbase4:43689] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4f06f6ebcace430c9d0d7a2c920da943 as hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4f06f6ebcace430c9d0d7a2c920da943 2023-05-29 12:59:43,990 INFO [M:0;jenkins-hbase4:43689] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4f06f6ebcace430c9d0d7a2c920da943 2023-05-29 12:59:43,990 INFO [M:0;jenkins-hbase4:43689] regionserver.HStore(1080): Added hdfs://localhost:38947/user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4f06f6ebcace430c9d0d7a2c920da943, entries=11, sequenceid=100, filesize=6.1 K 2023-05-29 12:59:43,991 INFO [M:0;jenkins-hbase4:43689] regionserver.HRegion(2948): Finished flush of dataSize ~38.89 KB/39824, heapSize ~47.31 KB/48448, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=100, compaction requested=false 2023-05-29 12:59:43,992 INFO [M:0;jenkins-hbase4:43689] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:59:43,992 DEBUG [M:0;jenkins-hbase4:43689] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:59:43,992 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0c452620-2e70-20b7-ba78-06ac5af1af6b/MasterData/WALs/jenkins-hbase4.apache.org,43689,1685365122163 2023-05-29 12:59:43,994 INFO [M:0;jenkins-hbase4:43689] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-29 12:59:43,994 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 12:59:43,995 INFO [M:0;jenkins-hbase4:43689] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43689 2023-05-29 12:59:43,997 DEBUG [M:0;jenkins-hbase4:43689] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,43689,1685365122163 already deleted, retry=false 2023-05-29 12:59:44,065 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 12:59:44,065 INFO [RS:0;jenkins-hbase4:37819] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37819,1685365122201; zookeeper connection closed. 
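The master finishes by flushing its local master:store region (procedure state), closing the MasterData WAL, and stopping its RPC server and ZooKeeper session. From the test's point of view, the whole sequence above is driven by a single teardown call on the testing utility; a minimal sketch, assuming the default single-master/single-region-server topology.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;

// The shutdown traced above (region closes, WAL archiving, master store flush,
// ZK session close) is triggered from the test by shutting the minicluster down.
public class MiniClusterTeardown {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster();      // 1 master, 1 region server by default
    try {
      // ... test body would run here ...
    } finally {
      util.shutdownMiniCluster(); // produces "Shutting down minicluster" / "Minicluster is down"
    }
  }
}
```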
2023-05-29 12:59:44,065 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): regionserver:37819-0x100770567640001, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 12:59:44,065 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6637b962] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6637b962 2023-05-29 12:59:44,065 INFO [Listener at localhost/38477] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-29 12:59:44,165 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 12:59:44,165 INFO [M:0;jenkins-hbase4:43689] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43689,1685365122163; zookeeper connection closed. 2023-05-29 12:59:44,165 DEBUG [Listener at localhost/38477-EventThread] zookeeper.ZKWatcher(600): master:43689-0x100770567640000, quorum=127.0.0.1:62871, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 12:59:44,166 WARN [Listener at localhost/38477] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:59:44,170 INFO [Listener at localhost/38477] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:59:44,274 WARN [BP-766987850-172.31.14.131-1685365121626 heartbeating to localhost/127.0.0.1:38947] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 12:59:44,274 WARN [BP-766987850-172.31.14.131-1685365121626 heartbeating to localhost/127.0.0.1:38947] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-766987850-172.31.14.131-1685365121626 (Datanode Uuid 6b7bdf10-1620-4334-a697-45e893beb643) service to localhost/127.0.0.1:38947 2023-05-29 12:59:44,275 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/cluster_973ea724-78f7-8ec9-b3ec-235fbbbb3474/dfs/data/data3/current/BP-766987850-172.31.14.131-1685365121626] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:59:44,276 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/cluster_973ea724-78f7-8ec9-b3ec-235fbbbb3474/dfs/data/data4/current/BP-766987850-172.31.14.131-1685365121626] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:59:44,277 WARN [Listener at localhost/38477] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 12:59:44,281 INFO [Listener at localhost/38477] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:59:44,386 WARN [BP-766987850-172.31.14.131-1685365121626 heartbeating to localhost/127.0.0.1:38947] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 12:59:44,386 WARN [BP-766987850-172.31.14.131-1685365121626 heartbeating to localhost/127.0.0.1:38947] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-766987850-172.31.14.131-1685365121626 
(Datanode Uuid a81f7297-bd3c-4abf-a833-12d99bd0595a) service to localhost/127.0.0.1:38947 2023-05-29 12:59:44,387 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/cluster_973ea724-78f7-8ec9-b3ec-235fbbbb3474/dfs/data/data1/current/BP-766987850-172.31.14.131-1685365121626] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:59:44,387 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/cluster_973ea724-78f7-8ec9-b3ec-235fbbbb3474/dfs/data/data2/current/BP-766987850-172.31.14.131-1685365121626] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 12:59:44,399 INFO [Listener at localhost/38477] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 12:59:44,457 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-29 12:59:44,511 INFO [Listener at localhost/38477] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-29 12:59:44,527 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-29 12:59:44,537 INFO [Listener at localhost/38477] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=94 (was 87) - Thread LEAK? -, OpenFileDescriptor=498 (was 461) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=47 (was 69), ProcessCount=168 (was 167) - ProcessCount LEAK? 
-, AvailableMemoryMB=2918 (was 3098) 2023-05-29 12:59:44,546 INFO [Listener at localhost/38477] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=95, OpenFileDescriptor=498, MaxFileDescriptor=60000, SystemLoadAverage=47, ProcessCount=168, AvailableMemoryMB=2919 2023-05-29 12:59:44,546 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-29 12:59:44,547 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/hadoop.log.dir so I do NOT create it in target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54 2023-05-29 12:59:44,547 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/03fac973-5717-a151-63df-49e74cc94184/hadoop.tmp.dir so I do NOT create it in target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54 2023-05-29 12:59:44,547 INFO [Listener at localhost/38477] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/cluster_5aba55ac-9a98-2cfc-a009-0935526c3880, deleteOnExit=true 2023-05-29 12:59:44,547 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-29 12:59:44,547 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/test.cache.data in system properties and HBase conf 2023-05-29 12:59:44,547 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/hadoop.tmp.dir in system properties and HBase conf 2023-05-29 12:59:44,547 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/hadoop.log.dir in system properties and HBase conf 2023-05-29 12:59:44,547 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-29 12:59:44,547 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-29 12:59:44,547 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-29 12:59:44,547 DEBUG [Listener at localhost/38477] fs.HFileSystem(308): 
The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-29 12:59:44,548 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-29 12:59:44,548 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-29 12:59:44,548 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-29 12:59:44,548 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 12:59:44,548 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-29 12:59:44,548 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-29 12:59:44,548 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 12:59:44,548 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 12:59:44,548 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-29 12:59:44,548 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/nfs.dump.dir in system properties and HBase conf 2023-05-29 12:59:44,549 INFO [Listener 
at localhost/38477] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/java.io.tmpdir in system properties and HBase conf 2023-05-29 12:59:44,549 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 12:59:44,549 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-29 12:59:44,549 INFO [Listener at localhost/38477] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-29 12:59:44,550 WARN [Listener at localhost/38477] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-29 12:59:44,553 WARN [Listener at localhost/38477] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 12:59:44,553 WARN [Listener at localhost/38477] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 12:59:44,592 WARN [Listener at localhost/38477] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:59:44,594 INFO [Listener at localhost/38477] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:59:44,598 INFO [Listener at localhost/38477] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/java.io.tmpdir/Jetty_localhost_34785_hdfs____tm1ycd/webapp 2023-05-29 12:59:44,688 INFO [Listener at localhost/38477] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34785 2023-05-29 12:59:44,689 WARN [Listener at localhost/38477] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
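The next test, testLogRolling, starts from scratch: fresh test directories, a mini DFS with a NameNode web UI on an ephemeral Jetty port and two DataNodes, plus one ZooKeeper server, matching the StartMiniClusterOption printed at the start of this block. A hedged sketch of requesting that topology explicitly:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

// Requests the same topology the log prints for testLogRolling:
// 1 master, 1 region server, 2 datanodes, 1 ZooKeeper server.
public class MiniClusterStartup {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);
    // ... run the test against util.getConnection() ...
    util.shutdownMiniCluster();
  }
}
```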
2023-05-29 12:59:44,692 WARN [Listener at localhost/38477] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 12:59:44,692 WARN [Listener at localhost/38477] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 12:59:44,732 WARN [Listener at localhost/35585] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:59:44,742 WARN [Listener at localhost/35585] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:59:44,744 WARN [Listener at localhost/35585] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:59:44,745 INFO [Listener at localhost/35585] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:59:44,749 INFO [Listener at localhost/35585] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/java.io.tmpdir/Jetty_localhost_34397_datanode____.6eidu1/webapp 2023-05-29 12:59:44,839 INFO [Listener at localhost/35585] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34397 2023-05-29 12:59:44,845 WARN [Listener at localhost/40251] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:59:44,854 WARN [Listener at localhost/40251] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 12:59:44,857 WARN [Listener at localhost/40251] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 12:59:44,858 INFO [Listener at localhost/40251] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 12:59:44,860 INFO [Listener at localhost/40251] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/java.io.tmpdir/Jetty_localhost_41913_datanode____.6xaxln/webapp 2023-05-29 12:59:44,935 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdae2660bfa8f2724: Processing first storage report for DS-5cca71e6-c3cf-449a-842e-b1549e71aa7b from datanode d7d4900f-1bfd-4bdf-82d8-fb3c6098be77 2023-05-29 12:59:44,935 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdae2660bfa8f2724: from storage DS-5cca71e6-c3cf-449a-842e-b1549e71aa7b node DatanodeRegistration(127.0.0.1:44399, datanodeUuid=d7d4900f-1bfd-4bdf-82d8-fb3c6098be77, infoPort=36621, infoSecurePort=0, ipcPort=40251, storageInfo=lv=-57;cid=testClusterID;nsid=508492486;c=1685365184555), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:59:44,935 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdae2660bfa8f2724: Processing first storage report for DS-ed80eb67-8dc8-4830-91b7-a3a14cbde6ef from datanode d7d4900f-1bfd-4bdf-82d8-fb3c6098be77 2023-05-29 12:59:44,935 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0xdae2660bfa8f2724: from storage DS-ed80eb67-8dc8-4830-91b7-a3a14cbde6ef node DatanodeRegistration(127.0.0.1:44399, datanodeUuid=d7d4900f-1bfd-4bdf-82d8-fb3c6098be77, infoPort=36621, infoSecurePort=0, ipcPort=40251, storageInfo=lv=-57;cid=testClusterID;nsid=508492486;c=1685365184555), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:59:44,958 INFO [Listener at localhost/40251] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41913 2023-05-29 12:59:44,964 WARN [Listener at localhost/46451] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 12:59:45,049 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc106c407a501f6c7: Processing first storage report for DS-56b29cdc-7d5f-4b7b-93c1-fe823f045c06 from datanode 8d45eb5c-6199-4404-ba88-5a951b933e91 2023-05-29 12:59:45,050 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc106c407a501f6c7: from storage DS-56b29cdc-7d5f-4b7b-93c1-fe823f045c06 node DatanodeRegistration(127.0.0.1:45935, datanodeUuid=8d45eb5c-6199-4404-ba88-5a951b933e91, infoPort=35609, infoSecurePort=0, ipcPort=46451, storageInfo=lv=-57;cid=testClusterID;nsid=508492486;c=1685365184555), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 12:59:45,050 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc106c407a501f6c7: Processing first storage report for DS-8e907bd5-2e93-4e37-8aa0-b154b3e5a633 from datanode 8d45eb5c-6199-4404-ba88-5a951b933e91 2023-05-29 12:59:45,050 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc106c407a501f6c7: from storage DS-8e907bd5-2e93-4e37-8aa0-b154b3e5a633 node DatanodeRegistration(127.0.0.1:45935, datanodeUuid=8d45eb5c-6199-4404-ba88-5a951b933e91, infoPort=35609, infoSecurePort=0, ipcPort=46451, storageInfo=lv=-57;cid=testClusterID;nsid=508492486;c=1685365184555), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 12:59:45,071 DEBUG [Listener at localhost/46451] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54 2023-05-29 12:59:45,073 INFO [Listener at localhost/46451] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/cluster_5aba55ac-9a98-2cfc-a009-0935526c3880/zookeeper_0, clientPort=57344, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/cluster_5aba55ac-9a98-2cfc-a009-0935526c3880/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/cluster_5aba55ac-9a98-2cfc-a009-0935526c3880/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-29 12:59:45,074 INFO [Listener at localhost/46451] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57344 2023-05-29 12:59:45,074 INFO [Listener at localhost/46451] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:59:45,075 INFO [Listener at localhost/46451] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:59:45,088 INFO [Listener at localhost/46451] util.FSUtils(471): Created version file at hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665 with version=8 2023-05-29 12:59:45,088 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/hbase-staging 2023-05-29 12:59:45,090 INFO [Listener at localhost/46451] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 12:59:45,090 INFO [Listener at localhost/46451] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:59:45,090 INFO [Listener at localhost/46451] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 12:59:45,090 INFO [Listener at localhost/46451] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 12:59:45,090 INFO [Listener at localhost/46451] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:59:45,090 INFO [Listener at localhost/46451] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 12:59:45,090 INFO [Listener at localhost/46451] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 12:59:45,091 INFO [Listener at localhost/46451] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43673 2023-05-29 12:59:45,092 INFO [Listener at localhost/46451] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:59:45,092 INFO [Listener at localhost/46451] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:59:45,093 INFO [Listener at localhost/46451] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43673 connecting to ZooKeeper ensemble=127.0.0.1:57344 2023-05-29 12:59:45,099 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:436730x0, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 12:59:45,099 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43673-0x10077065d330000 connected 2023-05-29 12:59:45,113 DEBUG [Listener at localhost/46451] 
zookeeper.ZKUtil(164): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:59:45,113 DEBUG [Listener at localhost/46451] zookeeper.ZKUtil(164): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:59:45,113 DEBUG [Listener at localhost/46451] zookeeper.ZKUtil(164): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 12:59:45,114 DEBUG [Listener at localhost/46451] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43673 2023-05-29 12:59:45,114 DEBUG [Listener at localhost/46451] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43673 2023-05-29 12:59:45,114 DEBUG [Listener at localhost/46451] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43673 2023-05-29 12:59:45,114 DEBUG [Listener at localhost/46451] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43673 2023-05-29 12:59:45,114 DEBUG [Listener at localhost/46451] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43673 2023-05-29 12:59:45,115 INFO [Listener at localhost/46451] master.HMaster(444): hbase.rootdir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665, hbase.cluster.distributed=false 2023-05-29 12:59:45,127 INFO [Listener at localhost/46451] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 12:59:45,128 INFO [Listener at localhost/46451] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:59:45,128 INFO [Listener at localhost/46451] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 12:59:45,128 INFO [Listener at localhost/46451] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 12:59:45,128 INFO [Listener at localhost/46451] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 12:59:45,128 INFO [Listener at localhost/46451] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 12:59:45,128 INFO [Listener at localhost/46451] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 12:59:45,129 INFO [Listener at localhost/46451] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33891 2023-05-29 12:59:45,130 INFO [Listener at localhost/46451] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 12:59:45,130 DEBUG [Listener at localhost/46451] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 
12:59:45,131 INFO [Listener at localhost/46451] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:59:45,132 INFO [Listener at localhost/46451] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:59:45,133 INFO [Listener at localhost/46451] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33891 connecting to ZooKeeper ensemble=127.0.0.1:57344 2023-05-29 12:59:45,135 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): regionserver:338910x0, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 12:59:45,136 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33891-0x10077065d330001 connected 2023-05-29 12:59:45,137 DEBUG [Listener at localhost/46451] zookeeper.ZKUtil(164): regionserver:33891-0x10077065d330001, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 12:59:45,137 DEBUG [Listener at localhost/46451] zookeeper.ZKUtil(164): regionserver:33891-0x10077065d330001, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 12:59:45,138 DEBUG [Listener at localhost/46451] zookeeper.ZKUtil(164): regionserver:33891-0x10077065d330001, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 12:59:45,138 DEBUG [Listener at localhost/46451] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33891 2023-05-29 12:59:45,138 DEBUG [Listener at localhost/46451] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33891 2023-05-29 12:59:45,138 DEBUG [Listener at localhost/46451] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33891 2023-05-29 12:59:45,143 DEBUG [Listener at localhost/46451] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33891 2023-05-29 12:59:45,143 DEBUG [Listener at localhost/46451] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33891 2023-05-29 12:59:45,144 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,43673,1685365185089 2023-05-29 12:59:45,147 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 12:59:45,147 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,43673,1685365185089 2023-05-29 12:59:45,148 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 12:59:45,148 DEBUG [Listener at 
localhost/46451-EventThread] zookeeper.ZKWatcher(600): regionserver:33891-0x10077065d330001, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 12:59:45,148 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:59:45,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 12:59:45,150 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 12:59:45,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,43673,1685365185089 from backup master directory 2023-05-29 12:59:45,151 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,43673,1685365185089 2023-05-29 12:59:45,151 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 12:59:45,151 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
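The entries above show two ZooKeeper patterns used during master startup: ZKUtil sets watchers on znodes such as /hbase/master before they exist, and ActiveMasterManager claims mastership by creating that znode while parking itself under /hbase/backup-masters first. Below is a minimal, self-contained sketch of those two primitives using the plain Apache ZooKeeper client; the connect string 127.0.0.1:2181 and the /demo-master path are placeholders, and this is not HBase's ZKUtil or ActiveMasterManager code.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Illustrative sketch only: (1) register a watch on a znode that does not
// exist yet via exists(), then (2) announce a role by creating an ephemeral
// znode, which fires the pending NodeCreated watch.
public class MasterZNodeSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    connected.await(10, TimeUnit.SECONDS);

    String path = "/demo-master"; // stand-in for /hbase/master

    // (1) exists() registers the watch whether or not the znode is there,
    // i.e. "Set watcher on znode that does not yet exist".
    zk.exists(path, event -> {
      if (event.getType() == Watcher.Event.EventType.NodeCreated) {
        System.out.println("NodeCreated fired for " + event.getPath());
      }
    });

    // (2) Claim the role with an ephemeral znode; it disappears automatically
    // if this session dies, letting a backup take over.
    zk.create(path, "host:port".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE,
        CreateMode.EPHEMERAL);

    Thread.sleep(500); // give the watch callback time to print
    zk.close();
  }
}

The ephemeral mode is what makes failover possible: if the session that created /demo-master ends, the znode vanishes and anyone watching it can react.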
2023-05-29 12:59:45,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,43673,1685365185089 2023-05-29 12:59:45,162 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/hbase.id with ID: 8df77c82-f630-40f2-abe8-14df3b3419f7 2023-05-29 12:59:45,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:59:45,174 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:59:45,181 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5a0af7b5 to 127.0.0.1:57344 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:59:45,186 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@35b325c7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:59:45,186 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 12:59:45,186 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-29 12:59:45,187 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:59:45,188 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/data/master/store-tmp 2023-05-29 12:59:45,194 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:59:45,194 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 12:59:45,194 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:59:45,194 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:59:45,194 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 12:59:45,194 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:59:45,194 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 12:59:45,194 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:59:45,195 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/WALs/jenkins-hbase4.apache.org,43673,1685365185089 2023-05-29 12:59:45,197 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43673%2C1685365185089, suffix=, logDir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/WALs/jenkins-hbase4.apache.org,43673,1685365185089, archiveDir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/oldWALs, maxLogs=10 2023-05-29 12:59:45,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/WALs/jenkins-hbase4.apache.org,43673,1685365185089/jenkins-hbase4.apache.org%2C43673%2C1685365185089.1685365185197 2023-05-29 12:59:45,204 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45935,DS-56b29cdc-7d5f-4b7b-93c1-fe823f045c06,DISK], DatanodeInfoWithStorage[127.0.0.1:44399,DS-5cca71e6-c3cf-449a-842e-b1549e71aa7b,DISK]] 2023-05-29 12:59:45,204 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:59:45,204 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:59:45,204 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:59:45,204 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:59:45,206 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:59:45,207 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-29 12:59:45,207 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-29 12:59:45,208 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:59:45,208 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:59:45,209 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:59:45,211 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 12:59:45,214 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:59:45,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=692231, jitterRate=-0.1197834312915802}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:59:45,214 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 12:59:45,215 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-29 12:59:45,215 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-29 12:59:45,215 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
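The CompactionConfiguration entry above reports minFilesToCompact:3, maxFilesToCompact:10 and a ratio of 1.200000 for the proc family. The sketch below shows a simplified version of the size-ratio test that this kind of selection is built on; it reuses the thresholds from the log but is illustrative only, not the actual ExploringCompactionPolicy code.

import java.util.List;

// Simplified sketch of a size-ratio check for compaction selection, using the
// parameters reported above (minFilesToCompact=3, maxFilesToCompact=10,
// ratio=1.2). Illustrative only.
public class CompactionRatioSketch {
  static final int MIN_FILES = 3;
  static final int MAX_FILES = 10;
  static final double RATIO = 1.2;

  /** A candidate set is acceptable if no single file dwarfs the rest. */
  static boolean filesInRatio(List<Long> sizes) {
    if (sizes.size() < MIN_FILES || sizes.size() > MAX_FILES) {
      return false;
    }
    long total = sizes.stream().mapToLong(Long::longValue).sum();
    for (long size : sizes) {
      if (size > RATIO * (total - size)) {
        return false; // this file is too big relative to the others
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // Three similarly sized store files compact together...
    System.out.println(filesInRatio(List.of(10_000L, 12_000L, 11_000L))); // true
    // ...but one huge file next to two small ones does not.
    System.out.println(filesInRatio(List.of(500_000L, 12_000L, 11_000L))); // false
  }
}

The point of the ratio test is to avoid repeatedly rewriting one very large file together with a handful of small ones.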
2023-05-29 12:59:45,215 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-29 12:59:45,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-29 12:59:45,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-29 12:59:45,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-29 12:59:45,217 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-29 12:59:45,218 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-29 12:59:45,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-29 12:59:45,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
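The StochasticLoadBalancer entry above lists the cost functions it loaded and reports "sum of multiplier of cost functions = 0.0" at this stage of startup. As a rough illustration of how several normalized cost functions can be folded into one score with per-function multipliers, here is a hedged sketch; the weights and cost values are made up, and this is not the balancer's actual code.

// Illustrative only: combining several normalized cost functions (each in
// [0,1]) into one score via configured multipliers.
public class WeightedCostSketch {
  static double weightedCost(double[] costs, double[] multipliers) {
    double weighted = 0.0, sumMultipliers = 0.0;
    for (int i = 0; i < costs.length; i++) {
      weighted += costs[i] * multipliers[i];
      sumMultipliers += multipliers[i];
    }
    // With every multiplier at 0 there is nothing to weigh, mirroring the
    // "sum of multiplier of cost functions = 0.0" figure in the log above.
    return sumMultipliers == 0.0 ? 0.0 : weighted / sumMultipliers;
  }

  public static void main(String[] args) {
    double[] costs = {0.10, 0.40, 0.05};        // e.g. skew, locality, moves
    double[] multipliers = {500.0, 25.0, 7.0};  // hypothetical weights
    System.out.printf("combined cost = %.4f%n", weightedCost(costs, multipliers));
  }
}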
2023-05-29 12:59:45,228 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-29 12:59:45,229 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-29 12:59:45,229 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-29 12:59:45,231 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:59:45,231 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-29 12:59:45,231 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-29 12:59:45,232 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-29 12:59:45,233 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 12:59:45,233 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): regionserver:33891-0x10077065d330001, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 12:59:45,233 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:59:45,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,43673,1685365185089, sessionid=0x10077065d330000, setting cluster-up flag (Was=false) 2023-05-29 12:59:45,238 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:59:45,241 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-29 12:59:45,242 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43673,1685365185089 2023-05-29 12:59:45,245 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 
12:59:45,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-29 12:59:45,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43673,1685365185089 2023-05-29 12:59:45,250 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/.hbase-snapshot/.tmp 2023-05-29 12:59:45,252 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-29 12:59:45,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:59:45,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:59:45,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:59:45,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 12:59:45,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-29 12:59:45,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:59:45,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 12:59:45,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:59:45,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685365215254 2023-05-29 12:59:45,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-29 12:59:45,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-29 12:59:45,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-29 12:59:45,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-29 12:59:45,254 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-29 12:59:45,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-29 12:59:45,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 12:59:45,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-29 12:59:45,255 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 12:59:45,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-29 12:59:45,255 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-29 12:59:45,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-29 12:59:45,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-29 12:59:45,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-29 12:59:45,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365185255,5,FailOnTimeoutGroup] 2023-05-29 12:59:45,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365185256,5,FailOnTimeoutGroup] 2023-05-29 12:59:45,256 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 12:59:45,256 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-29 12:59:45,256 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-29 12:59:45,256 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
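The ScheduledChore entries above (LogsCleaner and HFileCleaner every 600000 ms, ReplicationBarrierCleaner every 43200000 ms, SnapshotCleaner every 1800000 ms) are periodic background tasks. The sketch below shows the same shape with a plain ScheduledExecutorService, using the LogsCleaner period from the log; HBase's own ChoreService adds missed-run accounting and thread naming that this leaves out.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal sketch of a periodic "chore" in the spirit of the ScheduledChore
// entries above. Not HBase's ChoreService.
public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    ScheduledExecutorService chorePool = Executors.newScheduledThreadPool(1);

    long periodMs = 600_000L; // LogsCleaner period from the log
    chorePool.scheduleAtFixedRate(
        () -> System.out.println("cleaning old WALs under the oldWALs dir ..."),
        0, periodMs, TimeUnit.MILLISECONDS);

    // In a real server this runs for the process lifetime; here we let the
    // first run fire and then shut down.
    Thread.sleep(1_000);
    chorePool.shutdownNow();
  }
}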
2023-05-29 12:59:45,256 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 12:59:45,268 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 12:59:45,268 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 12:59:45,268 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665 2023-05-29 12:59:45,274 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:59:45,275 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 12:59:45,276 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/info 2023-05-29 12:59:45,276 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 12:59:45,277 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:59:45,277 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 12:59:45,278 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:59:45,279 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 12:59:45,279 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:59:45,279 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 12:59:45,280 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/table 2023-05-29 12:59:45,281 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 12:59:45,281 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:59:45,282 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740 2023-05-29 12:59:45,282 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740 2023-05-29 12:59:45,285 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 12:59:45,286 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 12:59:45,288 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:59:45,288 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=726175, jitterRate=-0.07662074267864227}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 12:59:45,288 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 12:59:45,289 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 12:59:45,289 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 12:59:45,289 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 12:59:45,289 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 12:59:45,289 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 12:59:45,289 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 12:59:45,289 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 12:59:45,290 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 12:59:45,290 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-29 12:59:45,290 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-29 12:59:45,292 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-29 12:59:45,293 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-29 12:59:45,345 INFO [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(951): ClusterId : 8df77c82-f630-40f2-abe8-14df3b3419f7 2023-05-29 12:59:45,346 DEBUG [RS:0;jenkins-hbase4:33891] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 12:59:45,348 DEBUG [RS:0;jenkins-hbase4:33891] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 12:59:45,348 DEBUG [RS:0;jenkins-hbase4:33891] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 12:59:45,350 DEBUG [RS:0;jenkins-hbase4:33891] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 12:59:45,351 DEBUG [RS:0;jenkins-hbase4:33891] zookeeper.ReadOnlyZKClient(139): Connect 0x0fe3b1ca to 127.0.0.1:57344 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:59:45,355 DEBUG [RS:0;jenkins-hbase4:33891] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1245e46, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:59:45,355 DEBUG [RS:0;jenkins-hbase4:33891] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@51bdaba, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 12:59:45,364 DEBUG [RS:0;jenkins-hbase4:33891] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33891 2023-05-29 12:59:45,364 INFO [RS:0;jenkins-hbase4:33891] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 12:59:45,364 INFO [RS:0;jenkins-hbase4:33891] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 12:59:45,364 DEBUG [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-29 12:59:45,364 INFO [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,43673,1685365185089 with isa=jenkins-hbase4.apache.org/172.31.14.131:33891, startcode=1685365185127 2023-05-29 12:59:45,365 DEBUG [RS:0;jenkins-hbase4:33891] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 12:59:45,368 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54777, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 12:59:45,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43673] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 12:59:45,369 DEBUG [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665 2023-05-29 12:59:45,369 DEBUG [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35585 2023-05-29 12:59:45,369 DEBUG [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 12:59:45,372 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 12:59:45,372 DEBUG [RS:0;jenkins-hbase4:33891] zookeeper.ZKUtil(162): regionserver:33891-0x10077065d330001, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 12:59:45,373 WARN [RS:0;jenkins-hbase4:33891] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
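The registration sequence above (reportForDuty to the master, a per-server znode under /hbase/rs, and a NodeChildrenChanged event on /hbase/rs) is a classic ZooKeeper group-membership pattern: members create ephemeral children under a group znode and a tracker watches the children list. A small stand-alone sketch of that pattern follows; /demo-rs, the member name, and the connect string are placeholders, and this is not RegionServerTracker itself.

import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Illustrative sketch of group membership via ephemeral child znodes.
public class RegionServerTrackerSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, e -> {
      if (e.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    connected.await(10, TimeUnit.SECONDS);

    String group = "/demo-rs"; // stand-in for /hbase/rs
    try {
      zk.create(group, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    } catch (KeeperException.NodeExistsException ignore) {
      // group znode already present
    }

    // Tracker side: watch the children list once; a real tracker re-arms the
    // watch inside the callback.
    List<String> before = zk.getChildren(group, event ->
        System.out.println("children changed: " + event.getType()));
    System.out.println("members before: " + before);

    // Member side: announce membership with an ephemeral child znode.
    zk.create(group + "/rs-1.example.org,16020,1", new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

    Thread.sleep(500);
    System.out.println("members after: " + zk.getChildren(group, false));
    zk.close();
  }
}

Because the child is ephemeral, a crashed member drops out of the children list on session expiry, which is how the tracker learns about failures without any explicit deregistration.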
2023-05-29 12:59:45,373 INFO [RS:0;jenkins-hbase4:33891] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:59:45,373 DEBUG [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(1946): logDir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 12:59:45,373 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33891,1685365185127] 2023-05-29 12:59:45,376 DEBUG [RS:0;jenkins-hbase4:33891] zookeeper.ZKUtil(162): regionserver:33891-0x10077065d330001, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 12:59:45,377 DEBUG [RS:0;jenkins-hbase4:33891] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 12:59:45,377 INFO [RS:0;jenkins-hbase4:33891] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 12:59:45,378 INFO [RS:0;jenkins-hbase4:33891] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 12:59:45,380 INFO [RS:0;jenkins-hbase4:33891] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 12:59:45,380 INFO [RS:0;jenkins-hbase4:33891] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 12:59:45,380 INFO [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 12:59:45,381 INFO [RS:0;jenkins-hbase4:33891] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
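The MemStoreFlusher line above reports globalMemStoreLimit=782.4 M and globalMemStoreLimitLowMark=743.3 M. Assuming the usual defaults of 0.4 of the heap for hbase.regionserver.global.memstore.size and 0.95 for its lower limit (assumptions here, not read from this test's configuration), those figures imply a roughly 1956 MB heap, which also matches the 782.40 MB BlockCache allocated earlier. The arithmetic, spelled out:

// Back-of-the-envelope check of the sizing numbers in the log above, under
// the assumed default fractions noted in the comments.
public class MemStoreSizingSketch {
  public static void main(String[] args) {
    double heapMb = 1956.0;            // implied heap: 782.4 MB / 0.4
    double memstoreFraction = 0.40;    // assumed default for hbase.regionserver.global.memstore.size
    double lowerMarkFraction = 0.95;   // assumed default for ...global.memstore.size.lower.limit

    double globalLimitMb = heapMb * memstoreFraction;
    double lowMarkMb = globalLimitMb * lowerMarkFraction;

    System.out.printf("globalMemStoreLimit ~= %.1f M%n", globalLimitMb);    // ~782.4 M
    System.out.printf("globalMemStoreLimitLowMark ~= %.1f M%n", lowMarkMb); // ~743.3 M
  }
}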
2023-05-29 12:59:45,381 DEBUG [RS:0;jenkins-hbase4:33891] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:59:45,381 DEBUG [RS:0;jenkins-hbase4:33891] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:59:45,381 DEBUG [RS:0;jenkins-hbase4:33891] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:59:45,381 DEBUG [RS:0;jenkins-hbase4:33891] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:59:45,381 DEBUG [RS:0;jenkins-hbase4:33891] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:59:45,381 DEBUG [RS:0;jenkins-hbase4:33891] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 12:59:45,381 DEBUG [RS:0;jenkins-hbase4:33891] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:59:45,381 DEBUG [RS:0;jenkins-hbase4:33891] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:59:45,381 DEBUG [RS:0;jenkins-hbase4:33891] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:59:45,381 DEBUG [RS:0;jenkins-hbase4:33891] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 12:59:45,382 INFO [RS:0;jenkins-hbase4:33891] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 12:59:45,382 INFO [RS:0;jenkins-hbase4:33891] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 12:59:45,382 INFO [RS:0;jenkins-hbase4:33891] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 12:59:45,392 INFO [RS:0;jenkins-hbase4:33891] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 12:59:45,393 INFO [RS:0;jenkins-hbase4:33891] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33891,1685365185127-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 12:59:45,402 INFO [RS:0;jenkins-hbase4:33891] regionserver.Replication(203): jenkins-hbase4.apache.org,33891,1685365185127 started 2023-05-29 12:59:45,403 INFO [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33891,1685365185127, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33891, sessionid=0x10077065d330001 2023-05-29 12:59:45,403 DEBUG [RS:0;jenkins-hbase4:33891] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 12:59:45,403 DEBUG [RS:0;jenkins-hbase4:33891] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 12:59:45,403 DEBUG [RS:0;jenkins-hbase4:33891] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33891,1685365185127' 2023-05-29 12:59:45,403 DEBUG [RS:0;jenkins-hbase4:33891] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 12:59:45,403 DEBUG [RS:0;jenkins-hbase4:33891] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 12:59:45,403 DEBUG [RS:0;jenkins-hbase4:33891] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 12:59:45,403 DEBUG [RS:0;jenkins-hbase4:33891] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 12:59:45,403 DEBUG [RS:0;jenkins-hbase4:33891] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 12:59:45,403 DEBUG [RS:0;jenkins-hbase4:33891] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33891,1685365185127' 2023-05-29 12:59:45,403 DEBUG [RS:0;jenkins-hbase4:33891] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 12:59:45,404 DEBUG [RS:0;jenkins-hbase4:33891] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 12:59:45,404 DEBUG [RS:0;jenkins-hbase4:33891] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 12:59:45,404 INFO [RS:0;jenkins-hbase4:33891] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 12:59:45,404 INFO [RS:0;jenkins-hbase4:33891] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-29 12:59:45,443 DEBUG [jenkins-hbase4:43673] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-29 12:59:45,444 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33891,1685365185127, state=OPENING 2023-05-29 12:59:45,446 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-29 12:59:45,447 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:59:45,447 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 12:59:45,447 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33891,1685365185127}] 2023-05-29 12:59:45,506 INFO [RS:0;jenkins-hbase4:33891] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33891%2C1685365185127, suffix=, logDir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127, archiveDir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/oldWALs, maxLogs=32 2023-05-29 12:59:45,516 INFO [RS:0;jenkins-hbase4:33891] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127/jenkins-hbase4.apache.org%2C33891%2C1685365185127.1685365185506 2023-05-29 12:59:45,516 DEBUG [RS:0;jenkins-hbase4:33891] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44399,DS-5cca71e6-c3cf-449a-842e-b1549e71aa7b,DISK], DatanodeInfoWithStorage[127.0.0.1:45935,DS-56b29cdc-7d5f-4b7b-93c1-fe823f045c06,DISK]] 2023-05-29 12:59:45,602 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 12:59:45,602 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-29 12:59:45,605 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36770, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-29 12:59:45,608 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-29 12:59:45,609 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 12:59:45,610 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33891%2C1685365185127.meta, suffix=.meta, logDir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127, archiveDir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/oldWALs, maxLogs=32 2023-05-29 12:59:45,618 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127/jenkins-hbase4.apache.org%2C33891%2C1685365185127.meta.1685365185611.meta 2023-05-29 12:59:45,618 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44399,DS-5cca71e6-c3cf-449a-842e-b1549e71aa7b,DISK], DatanodeInfoWithStorage[127.0.0.1:45935,DS-56b29cdc-7d5f-4b7b-93c1-fe823f045c06,DISK]] 2023-05-29 12:59:45,618 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:59:45,618 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-29 12:59:45,618 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-29 12:59:45,618 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-29 12:59:45,618 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-29 12:59:45,618 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:59:45,618 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-29 12:59:45,619 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-29 12:59:45,620 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 12:59:45,621 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/info 2023-05-29 12:59:45,621 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/info 2023-05-29 12:59:45,621 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 12:59:45,622 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:59:45,622 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 12:59:45,623 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:59:45,623 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/rep_barrier 2023-05-29 12:59:45,623 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 12:59:45,624 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:59:45,624 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 12:59:45,624 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/table 2023-05-29 12:59:45,624 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/table 2023-05-29 12:59:45,625 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 12:59:45,625 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:59:45,626 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740 2023-05-29 12:59:45,627 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740 2023-05-29 12:59:45,629 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 12:59:45,630 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 12:59:45,631 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=825324, jitterRate=0.04945485293865204}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 12:59:45,631 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 12:59:45,633 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685365185602 2023-05-29 12:59:45,636 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-29 12:59:45,636 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-29 12:59:45,637 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33891,1685365185127, state=OPEN 2023-05-29 12:59:45,638 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-29 12:59:45,639 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 12:59:45,641 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-29 12:59:45,641 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33891,1685365185127 in 192 msec 2023-05-29 12:59:45,643 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-29 12:59:45,643 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 351 msec 2023-05-29 12:59:45,645 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 393 msec 2023-05-29 12:59:45,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685365185645, completionTime=-1 2023-05-29 12:59:45,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-29 12:59:45,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-29 12:59:45,648 DEBUG [hconnection-0x63475a7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 12:59:45,650 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36774, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 12:59:45,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-29 12:59:45,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685365245651 2023-05-29 12:59:45,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685365305651 2023-05-29 12:59:45,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-29 12:59:45,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43673,1685365185089-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 12:59:45,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43673,1685365185089-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:59:45,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43673,1685365185089-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:59:45,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:43673, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 12:59:45,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-29 12:59:45,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-29 12:59:45,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 12:59:45,660 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-29 12:59:45,660 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-29 12:59:45,662 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 12:59:45,662 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 12:59:45,664 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/.tmp/data/hbase/namespace/3545d098cc2beda6407baa8daa20a51d 2023-05-29 12:59:45,664 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/.tmp/data/hbase/namespace/3545d098cc2beda6407baa8daa20a51d empty. 2023-05-29 12:59:45,665 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/.tmp/data/hbase/namespace/3545d098cc2beda6407baa8daa20a51d 2023-05-29 12:59:45,665 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-29 12:59:45,674 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-29 12:59:45,675 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3545d098cc2beda6407baa8daa20a51d, NAME => 'hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/.tmp 2023-05-29 12:59:45,682 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:59:45,682 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 3545d098cc2beda6407baa8daa20a51d, disabling compactions & flushes 2023-05-29 12:59:45,682 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. 
2023-05-29 12:59:45,682 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. 2023-05-29 12:59:45,682 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. after waiting 0 ms 2023-05-29 12:59:45,682 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. 2023-05-29 12:59:45,682 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. 2023-05-29 12:59:45,682 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 3545d098cc2beda6407baa8daa20a51d: 2023-05-29 12:59:45,684 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 12:59:45,685 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365185685"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685365185685"}]},"ts":"1685365185685"} 2023-05-29 12:59:45,687 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 12:59:45,688 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 12:59:45,688 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365185688"}]},"ts":"1685365185688"} 2023-05-29 12:59:45,690 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-29 12:59:45,696 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=3545d098cc2beda6407baa8daa20a51d, ASSIGN}] 2023-05-29 12:59:45,698 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=3545d098cc2beda6407baa8daa20a51d, ASSIGN 2023-05-29 12:59:45,699 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=3545d098cc2beda6407baa8daa20a51d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33891,1685365185127; forceNewPlan=false, retain=false 2023-05-29 12:59:45,850 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=3545d098cc2beda6407baa8daa20a51d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 12:59:45,850 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365185850"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685365185850"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685365185850"}]},"ts":"1685365185850"} 2023-05-29 12:59:45,852 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 3545d098cc2beda6407baa8daa20a51d, server=jenkins-hbase4.apache.org,33891,1685365185127}] 2023-05-29 12:59:46,008 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. 2023-05-29 12:59:46,008 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3545d098cc2beda6407baa8daa20a51d, NAME => 'hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:59:46,009 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 3545d098cc2beda6407baa8daa20a51d 2023-05-29 12:59:46,009 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:59:46,009 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3545d098cc2beda6407baa8daa20a51d 2023-05-29 12:59:46,009 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3545d098cc2beda6407baa8daa20a51d 2023-05-29 12:59:46,010 INFO [StoreOpener-3545d098cc2beda6407baa8daa20a51d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 3545d098cc2beda6407baa8daa20a51d 2023-05-29 12:59:46,012 DEBUG [StoreOpener-3545d098cc2beda6407baa8daa20a51d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/namespace/3545d098cc2beda6407baa8daa20a51d/info 2023-05-29 12:59:46,012 DEBUG [StoreOpener-3545d098cc2beda6407baa8daa20a51d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/namespace/3545d098cc2beda6407baa8daa20a51d/info 2023-05-29 12:59:46,012 INFO [StoreOpener-3545d098cc2beda6407baa8daa20a51d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3545d098cc2beda6407baa8daa20a51d columnFamilyName info 2023-05-29 12:59:46,013 INFO [StoreOpener-3545d098cc2beda6407baa8daa20a51d-1] regionserver.HStore(310): Store=3545d098cc2beda6407baa8daa20a51d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:59:46,014 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/namespace/3545d098cc2beda6407baa8daa20a51d 2023-05-29 12:59:46,014 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/namespace/3545d098cc2beda6407baa8daa20a51d 2023-05-29 12:59:46,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3545d098cc2beda6407baa8daa20a51d 2023-05-29 12:59:46,020 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/namespace/3545d098cc2beda6407baa8daa20a51d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:59:46,021 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3545d098cc2beda6407baa8daa20a51d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=852695, jitterRate=0.08425839245319366}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:59:46,021 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3545d098cc2beda6407baa8daa20a51d: 2023-05-29 12:59:46,023 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d., pid=6, masterSystemTime=1685365186005 2023-05-29 12:59:46,025 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. 2023-05-29 12:59:46,025 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. 
2023-05-29 12:59:46,026 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=3545d098cc2beda6407baa8daa20a51d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 12:59:46,026 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365186026"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685365186026"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685365186026"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685365186026"}]},"ts":"1685365186026"} 2023-05-29 12:59:46,030 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-29 12:59:46,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 3545d098cc2beda6407baa8daa20a51d, server=jenkins-hbase4.apache.org,33891,1685365185127 in 176 msec 2023-05-29 12:59:46,033 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-29 12:59:46,033 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=3545d098cc2beda6407baa8daa20a51d, ASSIGN in 334 msec 2023-05-29 12:59:46,034 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 12:59:46,034 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365186034"}]},"ts":"1685365186034"} 2023-05-29 12:59:46,035 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-29 12:59:46,038 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 12:59:46,039 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 379 msec 2023-05-29 12:59:46,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-29 12:59:46,062 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:59:46,062 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:59:46,066 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-29 12:59:46,073 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): 
master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:59:46,078 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-05-29 12:59:46,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-29 12:59:46,094 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 12:59:46,097 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-05-29 12:59:46,102 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-29 12:59:46,104 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-29 12:59:46,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.953sec 2023-05-29 12:59:46,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-29 12:59:46,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-29 12:59:46,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-29 12:59:46,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43673,1685365185089-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-29 12:59:46,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43673,1685365185089-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-29 12:59:46,106 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-29 12:59:46,146 DEBUG [Listener at localhost/46451] zookeeper.ReadOnlyZKClient(139): Connect 0x27afede1 to 127.0.0.1:57344 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 12:59:46,150 DEBUG [Listener at localhost/46451] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@199889da, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 12:59:46,151 DEBUG [hconnection-0x63710009-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 12:59:46,153 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36786, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 12:59:46,154 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,43673,1685365185089 2023-05-29 12:59:46,155 INFO [Listener at localhost/46451] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 12:59:46,159 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-29 12:59:46,159 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 12:59:46,159 INFO [Listener at localhost/46451] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-29 12:59:46,161 DEBUG [Listener at localhost/46451] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-29 12:59:46,163 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53226, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-29 12:59:46,165 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43673] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-29 12:59:46,165 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43673] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-29 12:59:46,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43673] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 12:59:46,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43673] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-05-29 12:59:46,171 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 12:59:46,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43673] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-05-29 12:59:46,173 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 12:59:46,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43673] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 12:59:46,174 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/.tmp/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772 2023-05-29 12:59:46,175 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/.tmp/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772 empty. 
2023-05-29 12:59:46,175 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/.tmp/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772 2023-05-29 12:59:46,175 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-05-29 12:59:46,185 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-29 12:59:46,186 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 58724a1b1084121c6c9f35ff5a00f772, NAME => 'TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/.tmp 2023-05-29 12:59:46,193 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:59:46,193 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1604): Closing 58724a1b1084121c6c9f35ff5a00f772, disabling compactions & flushes 2023-05-29 12:59:46,193 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 2023-05-29 12:59:46,193 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 2023-05-29 12:59:46,193 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. after waiting 0 ms 2023-05-29 12:59:46,193 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 2023-05-29 12:59:46,193 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 
2023-05-29 12:59:46,193 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 58724a1b1084121c6c9f35ff5a00f772: 2023-05-29 12:59:46,196 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 12:59:46,196 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685365186196"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685365186196"}]},"ts":"1685365186196"} 2023-05-29 12:59:46,198 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 12:59:46,199 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 12:59:46,199 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365186199"}]},"ts":"1685365186199"} 2023-05-29 12:59:46,200 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-05-29 12:59:46,204 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=58724a1b1084121c6c9f35ff5a00f772, ASSIGN}] 2023-05-29 12:59:46,205 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=58724a1b1084121c6c9f35ff5a00f772, ASSIGN 2023-05-29 12:59:46,206 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=58724a1b1084121c6c9f35ff5a00f772, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33891,1685365185127; forceNewPlan=false, retain=false 2023-05-29 12:59:46,357 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=58724a1b1084121c6c9f35ff5a00f772, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 12:59:46,357 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685365186357"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685365186357"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685365186357"}]},"ts":"1685365186357"} 2023-05-29 12:59:46,359 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 58724a1b1084121c6c9f35ff5a00f772, server=jenkins-hbase4.apache.org,33891,1685365185127}] 2023-05-29 12:59:46,513 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open 
TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 2023-05-29 12:59:46,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 58724a1b1084121c6c9f35ff5a00f772, NAME => 'TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.', STARTKEY => '', ENDKEY => ''} 2023-05-29 12:59:46,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 12:59:46,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 12:59:46,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 12:59:46,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 12:59:46,515 INFO [StoreOpener-58724a1b1084121c6c9f35ff5a00f772-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 12:59:46,516 DEBUG [StoreOpener-58724a1b1084121c6c9f35ff5a00f772-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info 2023-05-29 12:59:46,516 DEBUG [StoreOpener-58724a1b1084121c6c9f35ff5a00f772-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info 2023-05-29 12:59:46,517 INFO [StoreOpener-58724a1b1084121c6c9f35ff5a00f772-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 58724a1b1084121c6c9f35ff5a00f772 columnFamilyName info 2023-05-29 12:59:46,517 INFO [StoreOpener-58724a1b1084121c6c9f35ff5a00f772-1] regionserver.HStore(310): Store=58724a1b1084121c6c9f35ff5a00f772/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 12:59:46,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772 2023-05-29 12:59:46,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772 2023-05-29 12:59:46,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 12:59:46,522 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 12:59:46,522 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 58724a1b1084121c6c9f35ff5a00f772; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=691952, jitterRate=-0.12013757228851318}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 12:59:46,522 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 58724a1b1084121c6c9f35ff5a00f772: 2023-05-29 12:59:46,523 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772., pid=11, masterSystemTime=1685365186510 2023-05-29 12:59:46,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 2023-05-29 12:59:46,525 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 
2023-05-29 12:59:46,525 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=58724a1b1084121c6c9f35ff5a00f772, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 12:59:46,526 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685365186525"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685365186525"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685365186525"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685365186525"}]},"ts":"1685365186525"} 2023-05-29 12:59:46,529 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-29 12:59:46,529 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 58724a1b1084121c6c9f35ff5a00f772, server=jenkins-hbase4.apache.org,33891,1685365185127 in 168 msec 2023-05-29 12:59:46,531 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-29 12:59:46,532 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=58724a1b1084121c6c9f35ff5a00f772, ASSIGN in 326 msec 2023-05-29 12:59:46,532 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 12:59:46,532 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365186532"}]},"ts":"1685365186532"} 2023-05-29 12:59:46,534 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-05-29 12:59:46,536 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 12:59:46,537 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 371 msec 2023-05-29 12:59:49,301 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-29 12:59:51,377 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-29 12:59:51,378 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-29 12:59:51,378 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-05-29 12:59:56,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43673] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 12:59:56,174 INFO [Listener at localhost/46451] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, 
procId: 9 completed 2023-05-29 12:59:56,177 DEBUG [Listener at localhost/46451] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-05-29 12:59:56,177 DEBUG [Listener at localhost/46451] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 2023-05-29 12:59:56,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 12:59:56,189 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 58724a1b1084121c6c9f35ff5a00f772 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 12:59:56,201 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/2465a692734b4df28743e7279201d32c 2023-05-29 12:59:56,208 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/2465a692734b4df28743e7279201d32c as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/2465a692734b4df28743e7279201d32c 2023-05-29 12:59:56,214 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/2465a692734b4df28743e7279201d32c, entries=7, sequenceid=11, filesize=12.1 K 2023-05-29 12:59:56,215 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 58724a1b1084121c6c9f35ff5a00f772 in 26ms, sequenceid=11, compaction requested=false 2023-05-29 12:59:56,215 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 58724a1b1084121c6c9f35ff5a00f772: 2023-05-29 12:59:56,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 12:59:56,216 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 58724a1b1084121c6c9f35ff5a00f772 1/1 column families, dataSize=22.07 KB heapSize=23.88 KB 2023-05-29 12:59:56,228 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=35 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/534dc8c3da0b4d779550cd56bb4d1138 2023-05-29 12:59:56,234 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/534dc8c3da0b4d779550cd56bb4d1138 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/534dc8c3da0b4d779550cd56bb4d1138 2023-05-29 12:59:56,238 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/534dc8c3da0b4d779550cd56bb4d1138, entries=21, sequenceid=35, filesize=26.9 K 2023-05-29 12:59:56,239 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22596, heapSize ~23.86 KB/24432, currentSize=4.20 KB/4304 for 58724a1b1084121c6c9f35ff5a00f772 in 23ms, sequenceid=35, compaction requested=false 2023-05-29 12:59:56,239 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 58724a1b1084121c6c9f35ff5a00f772: 2023-05-29 12:59:56,239 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.0 K, sizeToCheck=16.0 K 2023-05-29 12:59:56,239 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 12:59:56,239 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/534dc8c3da0b4d779550cd56bb4d1138 because midkey is the same as first or last row 2023-05-29 12:59:58,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 12:59:58,225 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 58724a1b1084121c6c9f35ff5a00f772 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 12:59:58,236 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=45 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/7d873d13d36c46f09a1e8fe52db950b8 2023-05-29 12:59:58,242 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/7d873d13d36c46f09a1e8fe52db950b8 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/7d873d13d36c46f09a1e8fe52db950b8 2023-05-29 12:59:58,248 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/7d873d13d36c46f09a1e8fe52db950b8, entries=7, sequenceid=45, filesize=12.1 K 2023-05-29 12:59:58,249 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=17.86 KB/18292 for 58724a1b1084121c6c9f35ff5a00f772 in 24ms, sequenceid=45, compaction requested=true 2023-05-29 12:59:58,249 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 58724a1b1084121c6c9f35ff5a00f772: 2023-05-29 12:59:58,249 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=51.1 K, sizeToCheck=16.0 K 2023-05-29 12:59:58,249 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 12:59:58,249 DEBUG 
[MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/534dc8c3da0b4d779550cd56bb4d1138 because midkey is the same as first or last row 2023-05-29 12:59:58,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 12:59:58,249 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 12:59:58,250 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 12:59:58,250 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 58724a1b1084121c6c9f35ff5a00f772 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-05-29 12:59:58,252 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 52295 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 12:59:58,252 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1912): 58724a1b1084121c6c9f35ff5a00f772/info is initiating minor compaction (all files) 2023-05-29 12:59:58,252 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 58724a1b1084121c6c9f35ff5a00f772/info in TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 2023-05-29 12:59:58,253 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/2465a692734b4df28743e7279201d32c, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/534dc8c3da0b4d779550cd56bb4d1138, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/7d873d13d36c46f09a1e8fe52db950b8] into tmpdir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp, totalSize=51.1 K 2023-05-29 12:59:58,253 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 2465a692734b4df28743e7279201d32c, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685365196180 2023-05-29 12:59:58,254 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 534dc8c3da0b4d779550cd56bb4d1138, keycount=21, bloomtype=ROW, size=26.9 K, encoding=NONE, compression=NONE, seqNum=35, earliestPutTs=1685365196190 2023-05-29 12:59:58,255 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 7d873d13d36c46f09a1e8fe52db950b8, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=45, earliestPutTs=1685365196217 2023-05-29 12:59:58,281 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed 
memstore data size=18.91 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/ec9a5df52a7949aa9ee3b3241a0c6b83 2023-05-29 12:59:58,285 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] throttle.PressureAwareThroughputController(145): 58724a1b1084121c6c9f35ff5a00f772#info#compaction#29 average throughput is 17.96 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 12:59:58,289 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/ec9a5df52a7949aa9ee3b3241a0c6b83 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/ec9a5df52a7949aa9ee3b3241a0c6b83 2023-05-29 12:59:58,302 WARN [DataStreamer for file /user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/2d9a604c543e483abb47dfab3446c312] hdfs.DataStreamer(982): Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1257) at java.lang.Thread.join(Thread.java:1331) at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:980) at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:630) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:807) 2023-05-29 12:59:58,303 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/ec9a5df52a7949aa9ee3b3241a0c6b83, entries=18, sequenceid=66, filesize=23.7 K 2023-05-29 12:59:58,304 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=11.56 KB/11836 for 58724a1b1084121c6c9f35ff5a00f772 in 54ms, sequenceid=66, compaction requested=false 2023-05-29 12:59:58,305 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 58724a1b1084121c6c9f35ff5a00f772: 2023-05-29 12:59:58,305 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=74.8 K, sizeToCheck=16.0 K 2023-05-29 12:59:58,305 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 12:59:58,305 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/534dc8c3da0b4d779550cd56bb4d1138 because midkey is the same as first or last row 2023-05-29 12:59:58,308 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/2d9a604c543e483abb47dfab3446c312 as 
hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/2d9a604c543e483abb47dfab3446c312 2023-05-29 12:59:58,314 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 58724a1b1084121c6c9f35ff5a00f772/info of 58724a1b1084121c6c9f35ff5a00f772 into 2d9a604c543e483abb47dfab3446c312(size=41.7 K), total size for store is 65.4 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-29 12:59:58,314 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 58724a1b1084121c6c9f35ff5a00f772: 2023-05-29 12:59:58,314 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772., storeName=58724a1b1084121c6c9f35ff5a00f772/info, priority=13, startTime=1685365198249; duration=0sec 2023-05-29 12:59:58,315 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=65.4 K, sizeToCheck=16.0 K 2023-05-29 12:59:58,315 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 12:59:58,315 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/2d9a604c543e483abb47dfab3446c312 because midkey is the same as first or last row 2023-05-29 12:59:58,315 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:00:00,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 13:00:00,283 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 58724a1b1084121c6c9f35ff5a00f772 1/1 column families, dataSize=12.61 KB heapSize=13.75 KB 2023-05-29 13:00:00,292 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.61 KB at sequenceid=82 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/beadcc93c98d47fc98977e86c643b0b0 2023-05-29 13:00:00,298 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/beadcc93c98d47fc98977e86c643b0b0 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/beadcc93c98d47fc98977e86c643b0b0 2023-05-29 13:00:00,303 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=58724a1b1084121c6c9f35ff5a00f772, server=jenkins-hbase4.apache.org,33891,1685365185127 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-29 13:00:00,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] ipc.CallRunner(144): callId: 91 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36786 deadline: 1685365210303, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=58724a1b1084121c6c9f35ff5a00f772, server=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:00:00,304 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/beadcc93c98d47fc98977e86c643b0b0, entries=12, sequenceid=82, filesize=17.4 K 2023-05-29 13:00:00,305 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~12.61 KB/12912, heapSize ~13.73 KB/14064, currentSize=17.86 KB/18292 for 58724a1b1084121c6c9f35ff5a00f772 in 22ms, sequenceid=82, compaction requested=true 2023-05-29 13:00:00,305 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 58724a1b1084121c6c9f35ff5a00f772: 2023-05-29 13:00:00,305 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=82.8 K, sizeToCheck=16.0 K 2023-05-29 13:00:00,305 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 13:00:00,305 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/2d9a604c543e483abb47dfab3446c312 because midkey is the same as first or last row 2023-05-29 13:00:00,305 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:00:00,305 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 13:00:00,306 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 84764 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 13:00:00,306 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1912): 58724a1b1084121c6c9f35ff5a00f772/info is initiating minor compaction (all files) 2023-05-29 13:00:00,306 
INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 58724a1b1084121c6c9f35ff5a00f772/info in TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 2023-05-29 13:00:00,307 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/2d9a604c543e483abb47dfab3446c312, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/ec9a5df52a7949aa9ee3b3241a0c6b83, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/beadcc93c98d47fc98977e86c643b0b0] into tmpdir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp, totalSize=82.8 K 2023-05-29 13:00:00,307 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 2d9a604c543e483abb47dfab3446c312, keycount=35, bloomtype=ROW, size=41.7 K, encoding=NONE, compression=NONE, seqNum=45, earliestPutTs=1685365196180 2023-05-29 13:00:00,307 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting ec9a5df52a7949aa9ee3b3241a0c6b83, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=66, earliestPutTs=1685365198226 2023-05-29 13:00:00,308 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting beadcc93c98d47fc98977e86c643b0b0, keycount=12, bloomtype=ROW, size=17.4 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1685365198251 2023-05-29 13:00:00,318 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] throttle.PressureAwareThroughputController(145): 58724a1b1084121c6c9f35ff5a00f772#info#compaction#31 average throughput is 33.35 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 13:00:00,331 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/699bce38f4644773a9f04efdd542469e as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/699bce38f4644773a9f04efdd542469e 2023-05-29 13:00:00,337 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 58724a1b1084121c6c9f35ff5a00f772/info of 58724a1b1084121c6c9f35ff5a00f772 into 699bce38f4644773a9f04efdd542469e(size=73.5 K), total size for store is 73.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-29 13:00:00,337 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 58724a1b1084121c6c9f35ff5a00f772: 2023-05-29 13:00:00,337 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772., storeName=58724a1b1084121c6c9f35ff5a00f772/info, priority=13, startTime=1685365200305; duration=0sec 2023-05-29 13:00:00,337 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=73.5 K, sizeToCheck=16.0 K 2023-05-29 13:00:00,337 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 13:00:00,338 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:00:00,338 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:00:00,339 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43673] assignment.AssignmentManager(1140): Split request from jenkins-hbase4.apache.org,33891,1685365185127, parent={ENCODED => 58724a1b1084121c6c9f35ff5a00f772, NAME => 'TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-05-29 13:00:00,346 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43673] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:00:00,352 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43673] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=58724a1b1084121c6c9f35ff5a00f772, daughterA=330c5ee3d16c7b817ba0762e5abf2144, daughterB=39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:00,353 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=58724a1b1084121c6c9f35ff5a00f772, daughterA=330c5ee3d16c7b817ba0762e5abf2144, daughterB=39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:00,353 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=58724a1b1084121c6c9f35ff5a00f772, daughterA=330c5ee3d16c7b817ba0762e5abf2144, daughterB=39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:00,353 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=58724a1b1084121c6c9f35ff5a00f772, daughterA=330c5ee3d16c7b817ba0762e5abf2144, daughterB=39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:00,360 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=TestLogRolling-testLogRolling, region=58724a1b1084121c6c9f35ff5a00f772, UNASSIGN}] 2023-05-29 13:00:00,362 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=58724a1b1084121c6c9f35ff5a00f772, UNASSIGN 2023-05-29 13:00:00,362 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=58724a1b1084121c6c9f35ff5a00f772, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:00:00,363 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685365200362"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685365200362"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685365200362"}]},"ts":"1685365200362"} 2023-05-29 13:00:00,364 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure 58724a1b1084121c6c9f35ff5a00f772, server=jenkins-hbase4.apache.org,33891,1685365185127}] 2023-05-29 13:00:00,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 13:00:00,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 58724a1b1084121c6c9f35ff5a00f772, disabling compactions & flushes 2023-05-29 13:00:00,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 2023-05-29 13:00:00,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 2023-05-29 13:00:00,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. after waiting 0 ms 2023-05-29 13:00:00,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 
2023-05-29 13:00:00,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 58724a1b1084121c6c9f35ff5a00f772 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-05-29 13:00:00,532 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=103 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/a2dc0df096eb4c2a9f3cff2bd55c370b 2023-05-29 13:00:00,538 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.tmp/info/a2dc0df096eb4c2a9f3cff2bd55c370b as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/a2dc0df096eb4c2a9f3cff2bd55c370b 2023-05-29 13:00:00,542 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/a2dc0df096eb4c2a9f3cff2bd55c370b, entries=17, sequenceid=103, filesize=22.6 K 2023-05-29 13:00:00,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=0 B/0 for 58724a1b1084121c6c9f35ff5a00f772 in 21ms, sequenceid=103, compaction requested=false 2023-05-29 13:00:00,549 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/2465a692734b4df28743e7279201d32c, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/534dc8c3da0b4d779550cd56bb4d1138, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/2d9a604c543e483abb47dfab3446c312, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/7d873d13d36c46f09a1e8fe52db950b8, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/ec9a5df52a7949aa9ee3b3241a0c6b83, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/beadcc93c98d47fc98977e86c643b0b0] to archive 2023-05-29 13:00:00,549 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-29 13:00:00,551 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/2465a692734b4df28743e7279201d32c to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/2465a692734b4df28743e7279201d32c 2023-05-29 13:00:00,552 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/534dc8c3da0b4d779550cd56bb4d1138 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/534dc8c3da0b4d779550cd56bb4d1138 2023-05-29 13:00:00,554 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/2d9a604c543e483abb47dfab3446c312 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/2d9a604c543e483abb47dfab3446c312 2023-05-29 13:00:00,555 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/7d873d13d36c46f09a1e8fe52db950b8 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/7d873d13d36c46f09a1e8fe52db950b8 2023-05-29 13:00:00,556 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/ec9a5df52a7949aa9ee3b3241a0c6b83 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/ec9a5df52a7949aa9ee3b3241a0c6b83 2023-05-29 13:00:00,557 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/beadcc93c98d47fc98977e86c643b0b0 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/beadcc93c98d47fc98977e86c643b0b0 2023-05-29 
13:00:00,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/recovered.edits/106.seqid, newMaxSeqId=106, maxSeqId=1 2023-05-29 13:00:00,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 2023-05-29 13:00:00,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 58724a1b1084121c6c9f35ff5a00f772: 2023-05-29 13:00:00,568 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 13:00:00,568 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=58724a1b1084121c6c9f35ff5a00f772, regionState=CLOSED 2023-05-29 13:00:00,568 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685365200568"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685365200568"}]},"ts":"1685365200568"} 2023-05-29 13:00:00,572 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-05-29 13:00:00,572 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure 58724a1b1084121c6c9f35ff5a00f772, server=jenkins-hbase4.apache.org,33891,1685365185127 in 206 msec 2023-05-29 13:00:00,574 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-05-29 13:00:00,574 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=58724a1b1084121c6c9f35ff5a00f772, UNASSIGN in 212 msec 2023-05-29 13:00:00,585 INFO [PEWorker-3] assignment.SplitTableRegionProcedure(694): pid=12 splitting 2 storefiles, region=58724a1b1084121c6c9f35ff5a00f772, threads=2 2023-05-29 13:00:00,587 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/699bce38f4644773a9f04efdd542469e for region: 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 13:00:00,587 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/a2dc0df096eb4c2a9f3cff2bd55c370b for region: 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 13:00:00,596 DEBUG [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/a2dc0df096eb4c2a9f3cff2bd55c370b, top=true 2023-05-29 13:00:00,600 INFO [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(742): Created 
linkFile:hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/.splits/39e4914594287986759c4d3f07ca7705/info/TestLogRolling-testLogRolling=58724a1b1084121c6c9f35ff5a00f772-a2dc0df096eb4c2a9f3cff2bd55c370b for child: 39e4914594287986759c4d3f07ca7705, parent: 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 13:00:00,600 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/a2dc0df096eb4c2a9f3cff2bd55c370b for region: 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 13:00:00,616 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/699bce38f4644773a9f04efdd542469e for region: 58724a1b1084121c6c9f35ff5a00f772 2023-05-29 13:00:00,616 DEBUG [PEWorker-3] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region 58724a1b1084121c6c9f35ff5a00f772 Daughter A: 1 storefiles, Daughter B: 2 storefiles. 2023-05-29 13:00:00,652 DEBUG [PEWorker-3] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144/recovered.edits/106.seqid, newMaxSeqId=106, maxSeqId=-1 2023-05-29 13:00:00,655 DEBUG [PEWorker-3] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/recovered.edits/106.seqid, newMaxSeqId=106, maxSeqId=-1 2023-05-29 13:00:00,657 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685365200657"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1685365200657"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1685365200657"}]},"ts":"1685365200657"} 2023-05-29 13:00:00,657 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685365200657"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685365200657"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685365200657"}]},"ts":"1685365200657"} 2023-05-29 13:00:00,657 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685365200657"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685365200657"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685365200657"}]},"ts":"1685365200657"} 2023-05-29 13:00:00,698 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33891] regionserver.HRegion(9158): Flush requested on 1588230740 2023-05-29 13:00:00,699 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none 
of the CFs were above the size, flushing all. 2023-05-29 13:00:00,699 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-05-29 13:00:00,707 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=330c5ee3d16c7b817ba0762e5abf2144, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=39e4914594287986759c4d3f07ca7705, ASSIGN}] 2023-05-29 13:00:00,708 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=330c5ee3d16c7b817ba0762e5abf2144, ASSIGN 2023-05-29 13:00:00,708 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=39e4914594287986759c4d3f07ca7705, ASSIGN 2023-05-29 13:00:00,709 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=330c5ee3d16c7b817ba0762e5abf2144, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,33891,1685365185127; forceNewPlan=false, retain=false 2023-05-29 13:00:00,709 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=39e4914594287986759c4d3f07ca7705, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,33891,1685365185127; forceNewPlan=false, retain=false 2023-05-29 13:00:00,710 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/.tmp/info/5ae7c9f9d1034e5894eea2789c9b39aa 2023-05-29 13:00:00,724 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/.tmp/table/a913a7abbfc24eae9b0a16fdc2f85c6c 2023-05-29 13:00:00,730 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/.tmp/info/5ae7c9f9d1034e5894eea2789c9b39aa as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/info/5ae7c9f9d1034e5894eea2789c9b39aa 2023-05-29 13:00:00,734 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/info/5ae7c9f9d1034e5894eea2789c9b39aa, entries=29, sequenceid=17, filesize=8.6 K 2023-05-29 13:00:00,735 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/.tmp/table/a913a7abbfc24eae9b0a16fdc2f85c6c as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/table/a913a7abbfc24eae9b0a16fdc2f85c6c 2023-05-29 13:00:00,740 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/table/a913a7abbfc24eae9b0a16fdc2f85c6c, entries=4, sequenceid=17, filesize=4.8 K 2023-05-29 13:00:00,741 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4934, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 42ms, sequenceid=17, compaction requested=false 2023-05-29 13:00:00,742 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-29 13:00:00,861 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=330c5ee3d16c7b817ba0762e5abf2144, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:00:00,861 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=39e4914594287986759c4d3f07ca7705, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:00:00,861 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685365200861"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685365200861"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685365200861"}]},"ts":"1685365200861"} 2023-05-29 13:00:00,861 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685365200861"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685365200861"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685365200861"}]},"ts":"1685365200861"} 2023-05-29 13:00:00,862 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE; OpenRegionProcedure 330c5ee3d16c7b817ba0762e5abf2144, server=jenkins-hbase4.apache.org,33891,1685365185127}] 2023-05-29 13:00:00,863 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure 39e4914594287986759c4d3f07ca7705, server=jenkins-hbase4.apache.org,33891,1685365185127}] 2023-05-29 13:00:01,017 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144. 
2023-05-29 13:00:01,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 330c5ee3d16c7b817ba0762e5abf2144, NAME => 'TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144.', STARTKEY => '', ENDKEY => 'row0062'} 2023-05-29 13:00:01,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 330c5ee3d16c7b817ba0762e5abf2144 2023-05-29 13:00:01,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 13:00:01,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 330c5ee3d16c7b817ba0762e5abf2144 2023-05-29 13:00:01,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 330c5ee3d16c7b817ba0762e5abf2144 2023-05-29 13:00:01,019 INFO [StoreOpener-330c5ee3d16c7b817ba0762e5abf2144-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 330c5ee3d16c7b817ba0762e5abf2144 2023-05-29 13:00:01,020 DEBUG [StoreOpener-330c5ee3d16c7b817ba0762e5abf2144-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144/info 2023-05-29 13:00:01,020 DEBUG [StoreOpener-330c5ee3d16c7b817ba0762e5abf2144-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144/info 2023-05-29 13:00:01,020 INFO [StoreOpener-330c5ee3d16c7b817ba0762e5abf2144-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 330c5ee3d16c7b817ba0762e5abf2144 columnFamilyName info 2023-05-29 13:00:01,031 DEBUG [StoreOpener-330c5ee3d16c7b817ba0762e5abf2144-1] regionserver.HStore(539): loaded hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144/info/699bce38f4644773a9f04efdd542469e.58724a1b1084121c6c9f35ff5a00f772->hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/699bce38f4644773a9f04efdd542469e-bottom 2023-05-29 13:00:01,032 INFO 
[StoreOpener-330c5ee3d16c7b817ba0762e5abf2144-1] regionserver.HStore(310): Store=330c5ee3d16c7b817ba0762e5abf2144/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 13:00:01,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144 2023-05-29 13:00:01,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144 2023-05-29 13:00:01,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 330c5ee3d16c7b817ba0762e5abf2144 2023-05-29 13:00:01,036 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 330c5ee3d16c7b817ba0762e5abf2144; next sequenceid=107; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=867346, jitterRate=0.1028883308172226}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 13:00:01,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 330c5ee3d16c7b817ba0762e5abf2144: 2023-05-29 13:00:01,037 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144., pid=17, masterSystemTime=1685365201014 2023-05-29 13:00:01,037 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:00:01,038 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-05-29 13:00:01,039 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144. 2023-05-29 13:00:01,039 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1912): 330c5ee3d16c7b817ba0762e5abf2144/info is initiating minor compaction (all files) 2023-05-29 13:00:01,039 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 330c5ee3d16c7b817ba0762e5abf2144/info in TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144. 
2023-05-29 13:00:01,039 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144/info/699bce38f4644773a9f04efdd542469e.58724a1b1084121c6c9f35ff5a00f772->hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/699bce38f4644773a9f04efdd542469e-bottom] into tmpdir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144/.tmp, totalSize=73.5 K 2023-05-29 13:00:01,039 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 699bce38f4644773a9f04efdd542469e.58724a1b1084121c6c9f35ff5a00f772, keycount=32, bloomtype=ROW, size=73.5 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1685365196180 2023-05-29 13:00:01,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144. 2023-05-29 13:00:01,040 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144. 2023-05-29 13:00:01,040 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 2023-05-29 13:00:01,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 39e4914594287986759c4d3f07ca7705, NAME => 'TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.', STARTKEY => 'row0062', ENDKEY => ''} 2023-05-29 13:00:01,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:01,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 13:00:01,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:01,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:01,040 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=330c5ee3d16c7b817ba0762e5abf2144, regionState=OPEN, openSeqNum=107, regionLocation=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:00:01,040 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685365201040"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685365201040"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685365201040"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685365201040"}]},"ts":"1685365201040"} 2023-05-29 13:00:01,041 INFO [StoreOpener-39e4914594287986759c4d3f07ca7705-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:01,042 DEBUG [StoreOpener-39e4914594287986759c4d3f07ca7705-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info 2023-05-29 13:00:01,042 DEBUG [StoreOpener-39e4914594287986759c4d3f07ca7705-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info 2023-05-29 13:00:01,043 INFO [StoreOpener-39e4914594287986759c4d3f07ca7705-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 39e4914594287986759c4d3f07ca7705 columnFamilyName info 2023-05-29 13:00:01,044 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-05-29 13:00:01,044 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; OpenRegionProcedure 330c5ee3d16c7b817ba0762e5abf2144, server=jenkins-hbase4.apache.org,33891,1685365185127 in 180 msec 2023-05-29 13:00:01,046 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] throttle.PressureAwareThroughputController(145): 330c5ee3d16c7b817ba0762e5abf2144#info#compaction#35 average throughput is 20.87 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 13:00:01,046 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=330c5ee3d16c7b817ba0762e5abf2144, ASSIGN in 337 msec 2023-05-29 13:00:01,053 DEBUG [StoreOpener-39e4914594287986759c4d3f07ca7705-1] regionserver.HStore(539): loaded hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/699bce38f4644773a9f04efdd542469e.58724a1b1084121c6c9f35ff5a00f772->hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/699bce38f4644773a9f04efdd542469e-top 2023-05-29 13:00:01,061 DEBUG [StoreOpener-39e4914594287986759c4d3f07ca7705-1] regionserver.HStore(539): loaded hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/TestLogRolling-testLogRolling=58724a1b1084121c6c9f35ff5a00f772-a2dc0df096eb4c2a9f3cff2bd55c370b 2023-05-29 13:00:01,061 INFO [StoreOpener-39e4914594287986759c4d3f07ca7705-1] regionserver.HStore(310): Store=39e4914594287986759c4d3f07ca7705/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 13:00:01,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:01,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:01,067 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144/.tmp/info/58b88d7f54b74694927c546c8e425dcd as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144/info/58b88d7f54b74694927c546c8e425dcd 2023-05-29 13:00:01,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:01,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 39e4914594287986759c4d3f07ca7705; next sequenceid=107; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=728304, jitterRate=-0.07391436398029327}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 13:00:01,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:00:01,069 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for 
TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705., pid=18, masterSystemTime=1685365201014 2023-05-29 13:00:01,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:00:01,071 DEBUG [RS:0;jenkins-hbase4:33891-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 2 store files, 0 compacting, 2 eligible, 16 blocking 2023-05-29 13:00:01,072 INFO [RS:0;jenkins-hbase4:33891-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 2023-05-29 13:00:01,072 DEBUG [RS:0;jenkins-hbase4:33891-longCompactions-0] regionserver.HStore(1912): 39e4914594287986759c4d3f07ca7705/info is initiating minor compaction (all files) 2023-05-29 13:00:01,072 INFO [RS:0;jenkins-hbase4:33891-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 39e4914594287986759c4d3f07ca7705/info in TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 2023-05-29 13:00:01,072 INFO [RS:0;jenkins-hbase4:33891-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/699bce38f4644773a9f04efdd542469e.58724a1b1084121c6c9f35ff5a00f772->hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/699bce38f4644773a9f04efdd542469e-top, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/TestLogRolling-testLogRolling=58724a1b1084121c6c9f35ff5a00f772-a2dc0df096eb4c2a9f3cff2bd55c370b] into tmpdir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp, totalSize=96.1 K 2023-05-29 13:00:01,073 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 2023-05-29 13:00:01,073 DEBUG [RS:0;jenkins-hbase4:33891-longCompactions-0] compactions.Compactor(207): Compacting 699bce38f4644773a9f04efdd542469e.58724a1b1084121c6c9f35ff5a00f772, keycount=32, bloomtype=ROW, size=73.5 K, encoding=NONE, compression=NONE, seqNum=83, earliestPutTs=1685365196180 2023-05-29 13:00:01,073 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 
2023-05-29 13:00:01,073 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=39e4914594287986759c4d3f07ca7705, regionState=OPEN, openSeqNum=107, regionLocation=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:00:01,073 DEBUG [RS:0;jenkins-hbase4:33891-longCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=58724a1b1084121c6c9f35ff5a00f772-a2dc0df096eb4c2a9f3cff2bd55c370b, keycount=17, bloomtype=ROW, size=22.6 K, encoding=NONE, compression=NONE, seqNum=103, earliestPutTs=1685365200283 2023-05-29 13:00:01,074 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685365201073"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685365201073"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685365201073"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685365201073"}]},"ts":"1685365201073"} 2023-05-29 13:00:01,075 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in 330c5ee3d16c7b817ba0762e5abf2144/info of 330c5ee3d16c7b817ba0762e5abf2144 into 58b88d7f54b74694927c546c8e425dcd(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-29 13:00:01,075 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 330c5ee3d16c7b817ba0762e5abf2144: 2023-05-29 13:00:01,075 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144., storeName=330c5ee3d16c7b817ba0762e5abf2144/info, priority=15, startTime=1685365201037; duration=0sec 2023-05-29 13:00:01,075 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:00:01,078 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-05-29 13:00:01,078 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure 39e4914594287986759c4d3f07ca7705, server=jenkins-hbase4.apache.org,33891,1685365185127 in 212 msec 2023-05-29 13:00:01,080 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-05-29 13:00:01,080 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=39e4914594287986759c4d3f07ca7705, ASSIGN in 371 msec 2023-05-29 13:00:01,081 INFO [RS:0;jenkins-hbase4:33891-longCompactions-0] throttle.PressureAwareThroughputController(145): 39e4914594287986759c4d3f07ca7705#info#compaction#36 average throughput is 21.55 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 13:00:01,082 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=58724a1b1084121c6c9f35ff5a00f772, daughterA=330c5ee3d16c7b817ba0762e5abf2144, daughterB=39e4914594287986759c4d3f07ca7705 in 734 msec 2023-05-29 13:00:01,097 DEBUG [RS:0;jenkins-hbase4:33891-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/8f89f8bb7a354e03be2dc8f7e45f8f07 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/8f89f8bb7a354e03be2dc8f7e45f8f07 2023-05-29 13:00:01,102 INFO [RS:0;jenkins-hbase4:33891-longCompactions-0] regionserver.HStore(1652): Completed compaction of 2 (all) file(s) in 39e4914594287986759c4d3f07ca7705/info of 39e4914594287986759c4d3f07ca7705 into 8f89f8bb7a354e03be2dc8f7e45f8f07(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-29 13:00:01,103 DEBUG [RS:0;jenkins-hbase4:33891-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:00:01,103 INFO [RS:0;jenkins-hbase4:33891-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705., storeName=39e4914594287986759c4d3f07ca7705/info, priority=14, startTime=1685365201069; duration=0sec 2023-05-29 13:00:01,103 DEBUG [RS:0;jenkins-hbase4:33891-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:00:06,122 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-29 13:00:10,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] ipc.CallRunner(144): callId: 93 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36786 deadline: 1685365220385, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1685365186164.58724a1b1084121c6c9f35ff5a00f772. 
is not online on jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:00:30,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:30,570 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 13:00:30,584 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=117 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/440ccff8fe2a4413a8bde1d55b7b413c 2023-05-29 13:00:30,590 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/440ccff8fe2a4413a8bde1d55b7b413c as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/440ccff8fe2a4413a8bde1d55b7b413c 2023-05-29 13:00:30,594 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/440ccff8fe2a4413a8bde1d55b7b413c, entries=7, sequenceid=117, filesize=12.1 K 2023-05-29 13:00:30,595 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=7.36 KB/7532 for 39e4914594287986759c4d3f07ca7705 in 25ms, sequenceid=117, compaction requested=false 2023-05-29 13:00:30,595 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:00:31,373 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=2, created chunk count=13, reused chunk count=30, reuseRatio=69.77% 2023-05-29 13:00:31,373 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-05-29 13:00:32,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:32,584 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=8.41 KB heapSize=9.25 KB 2023-05-29 13:00:32,607 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=8.41 KB at sequenceid=128 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/7105b1d181754d3f9dd913a055f32528 2023-05-29 13:00:32,612 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=39e4914594287986759c4d3f07ca7705, server=jenkins-hbase4.apache.org,33891,1685365185127 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-29 13:00:32,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] ipc.CallRunner(144): callId: 133 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36786 deadline: 1685365242612, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=39e4914594287986759c4d3f07ca7705, server=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:00:32,614 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/7105b1d181754d3f9dd913a055f32528 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7105b1d181754d3f9dd913a055f32528 2023-05-29 13:00:32,619 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7105b1d181754d3f9dd913a055f32528, entries=8, sequenceid=128, filesize=13.2 K 2023-05-29 13:00:32,620 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~8.41 KB/8608, heapSize ~9.23 KB/9456, currentSize=22.07 KB/22596 for 39e4914594287986759c4d3f07ca7705 in 36ms, sequenceid=128, compaction requested=true 2023-05-29 13:00:32,620 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:00:32,620 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:00:32,620 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 13:00:32,621 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 53538 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 13:00:32,622 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1912): 39e4914594287986759c4d3f07ca7705/info is initiating minor compaction (all files) 2023-05-29 13:00:32,622 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 
39e4914594287986759c4d3f07ca7705/info in TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 2023-05-29 13:00:32,622 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/8f89f8bb7a354e03be2dc8f7e45f8f07, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/440ccff8fe2a4413a8bde1d55b7b413c, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7105b1d181754d3f9dd913a055f32528] into tmpdir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp, totalSize=52.3 K 2023-05-29 13:00:32,622 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 8f89f8bb7a354e03be2dc8f7e45f8f07, keycount=21, bloomtype=ROW, size=27.0 K, encoding=NONE, compression=NONE, seqNum=103, earliestPutTs=1685365198275 2023-05-29 13:00:32,623 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 440ccff8fe2a4413a8bde1d55b7b413c, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=117, earliestPutTs=1685365230563 2023-05-29 13:00:32,623 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 7105b1d181754d3f9dd913a055f32528, keycount=8, bloomtype=ROW, size=13.2 K, encoding=NONE, compression=NONE, seqNum=128, earliestPutTs=1685365230571 2023-05-29 13:00:32,633 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] throttle.PressureAwareThroughputController(145): 39e4914594287986759c4d3f07ca7705#info#compaction#39 average throughput is 36.94 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 13:00:32,647 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/0dd11053156f46a18a873d960297536c as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0dd11053156f46a18a873d960297536c 2023-05-29 13:00:32,653 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 39e4914594287986759c4d3f07ca7705/info of 39e4914594287986759c4d3f07ca7705 into 0dd11053156f46a18a873d960297536c(size=42.9 K), total size for store is 42.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-29 13:00:32,654 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:00:32,654 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705., storeName=39e4914594287986759c4d3f07ca7705/info, priority=13, startTime=1685365232620; duration=0sec 2023-05-29 13:00:32,654 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:00:38,361 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-29 13:00:42,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:42,678 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=23.12 KB heapSize=25 KB 2023-05-29 13:00:42,688 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=39e4914594287986759c4d3f07ca7705, server=jenkins-hbase4.apache.org,33891,1685365185127 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-29 13:00:42,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] ipc.CallRunner(144): callId: 143 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36786 deadline: 1685365252688, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=39e4914594287986759c4d3f07ca7705, server=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:00:42,692 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=23.12 KB at sequenceid=154 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/754f03ef6cfb4371823eada5012331fc 2023-05-29 13:00:42,698 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/754f03ef6cfb4371823eada5012331fc as 
hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/754f03ef6cfb4371823eada5012331fc 2023-05-29 13:00:42,704 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/754f03ef6cfb4371823eada5012331fc, entries=22, sequenceid=154, filesize=27.9 K 2023-05-29 13:00:42,705 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.12 KB/23672, heapSize ~24.98 KB/25584, currentSize=7.36 KB/7532 for 39e4914594287986759c4d3f07ca7705 in 26ms, sequenceid=154, compaction requested=false 2023-05-29 13:00:42,705 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:00:52,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:52,776 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=8.41 KB heapSize=9.25 KB 2023-05-29 13:00:52,786 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=8.41 KB at sequenceid=165 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/7b73690b81a341e38d50942b091678ac 2023-05-29 13:00:52,792 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/7b73690b81a341e38d50942b091678ac as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7b73690b81a341e38d50942b091678ac 2023-05-29 13:00:52,798 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7b73690b81a341e38d50942b091678ac, entries=8, sequenceid=165, filesize=13.2 K 2023-05-29 13:00:52,799 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~8.41 KB/8608, heapSize ~9.23 KB/9456, currentSize=1.05 KB/1076 for 39e4914594287986759c4d3f07ca7705 in 23ms, sequenceid=165, compaction requested=true 2023-05-29 13:00:52,799 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:00:52,799 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:00:52,799 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 13:00:52,800 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 86036 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 13:00:52,800 DEBUG 
[RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1912): 39e4914594287986759c4d3f07ca7705/info is initiating minor compaction (all files) 2023-05-29 13:00:52,801 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 39e4914594287986759c4d3f07ca7705/info in TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 2023-05-29 13:00:52,801 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0dd11053156f46a18a873d960297536c, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/754f03ef6cfb4371823eada5012331fc, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7b73690b81a341e38d50942b091678ac] into tmpdir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp, totalSize=84.0 K 2023-05-29 13:00:52,801 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 0dd11053156f46a18a873d960297536c, keycount=36, bloomtype=ROW, size=42.9 K, encoding=NONE, compression=NONE, seqNum=128, earliestPutTs=1685365198275 2023-05-29 13:00:52,801 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 754f03ef6cfb4371823eada5012331fc, keycount=22, bloomtype=ROW, size=27.9 K, encoding=NONE, compression=NONE, seqNum=154, earliestPutTs=1685365232585 2023-05-29 13:00:52,802 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 7b73690b81a341e38d50942b091678ac, keycount=8, bloomtype=ROW, size=13.2 K, encoding=NONE, compression=NONE, seqNum=165, earliestPutTs=1685365242679 2023-05-29 13:00:52,811 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] throttle.PressureAwareThroughputController(145): 39e4914594287986759c4d3f07ca7705#info#compaction#42 average throughput is 67.73 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 13:00:52,827 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/d8f66e27848d4ee3a3933ffd5f8d5553 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/d8f66e27848d4ee3a3933ffd5f8d5553 2023-05-29 13:00:52,832 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 39e4914594287986759c4d3f07ca7705/info of 39e4914594287986759c4d3f07ca7705 into d8f66e27848d4ee3a3933ffd5f8d5553(size=74.7 K), total size for store is 74.7 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-29 13:00:52,832 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:00:52,832 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705., storeName=39e4914594287986759c4d3f07ca7705/info, priority=13, startTime=1685365252799; duration=0sec 2023-05-29 13:00:52,832 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:00:54,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 39e4914594287986759c4d3f07ca7705 2023-05-29 13:00:54,784 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 13:00:54,795 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=176 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/0ed5cb619bb4410cba39dfd2953f7be1 2023-05-29 13:00:54,801 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/0ed5cb619bb4410cba39dfd2953f7be1 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0ed5cb619bb4410cba39dfd2953f7be1 2023-05-29 13:00:54,809 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=39e4914594287986759c4d3f07ca7705, server=jenkins-hbase4.apache.org,33891,1685365185127 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-29 13:00:54,809 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0ed5cb619bb4410cba39dfd2953f7be1, entries=7, sequenceid=176, filesize=12.1 K 2023-05-29 13:00:54,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] ipc.CallRunner(144): callId: 175 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36786 deadline: 1685365264809, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=39e4914594287986759c4d3f07ca7705, server=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:00:54,811 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for 39e4914594287986759c4d3f07ca7705 in 26ms, sequenceid=176, compaction requested=false 2023-05-29 13:00:54,811 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:04,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 39e4914594287986759c4d3f07ca7705 2023-05-29 13:01:04,883 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-05-29 13:01:04,898 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=202 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/95a05a792a2b4c36ab1bee4ac74af646 2023-05-29 13:01:04,903 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/95a05a792a2b4c36ab1bee4ac74af646 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/95a05a792a2b4c36ab1bee4ac74af646 2023-05-29 13:01:04,909 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/95a05a792a2b4c36ab1bee4ac74af646, entries=23, 
sequenceid=202, filesize=29.0 K 2023-05-29 13:01:04,910 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=3.15 KB/3228 for 39e4914594287986759c4d3f07ca7705 in 27ms, sequenceid=202, compaction requested=true 2023-05-29 13:01:04,910 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:04,910 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:01:04,910 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 13:01:04,911 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 118619 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 13:01:04,911 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1912): 39e4914594287986759c4d3f07ca7705/info is initiating minor compaction (all files) 2023-05-29 13:01:04,911 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 39e4914594287986759c4d3f07ca7705/info in TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 2023-05-29 13:01:04,911 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/d8f66e27848d4ee3a3933ffd5f8d5553, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0ed5cb619bb4410cba39dfd2953f7be1, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/95a05a792a2b4c36ab1bee4ac74af646] into tmpdir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp, totalSize=115.8 K 2023-05-29 13:01:04,912 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting d8f66e27848d4ee3a3933ffd5f8d5553, keycount=66, bloomtype=ROW, size=74.7 K, encoding=NONE, compression=NONE, seqNum=165, earliestPutTs=1685365198275 2023-05-29 13:01:04,912 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 0ed5cb619bb4410cba39dfd2953f7be1, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=176, earliestPutTs=1685365252777 2023-05-29 13:01:04,912 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 95a05a792a2b4c36ab1bee4ac74af646, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=202, earliestPutTs=1685365254785 2023-05-29 13:01:04,923 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] throttle.PressureAwareThroughputController(145): 39e4914594287986759c4d3f07ca7705#info#compaction#45 average throughput is 98.51 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 13:01:04,936 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/010200849a354c3a8b2e472821237531 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/010200849a354c3a8b2e472821237531 2023-05-29 13:01:04,942 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 39e4914594287986759c4d3f07ca7705/info of 39e4914594287986759c4d3f07ca7705 into 010200849a354c3a8b2e472821237531(size=106.4 K), total size for store is 106.4 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-29 13:01:04,942 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:04,942 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705., storeName=39e4914594287986759c4d3f07ca7705/info, priority=13, startTime=1685365264910; duration=0sec 2023-05-29 13:01:04,942 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:01:06,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 39e4914594287986759c4d3f07ca7705 2023-05-29 13:01:06,896 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 13:01:06,912 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=213 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/f04a79677816453eab798df626b83eb2 2023-05-29 13:01:06,919 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/f04a79677816453eab798df626b83eb2 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/f04a79677816453eab798df626b83eb2 2023-05-29 13:01:06,920 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=39e4914594287986759c4d3f07ca7705, server=jenkins-hbase4.apache.org,33891,1685365185127 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-29 13:01:06,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] ipc.CallRunner(144): callId: 207 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36786 deadline: 1685365276920, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=39e4914594287986759c4d3f07ca7705, server=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:01:06,924 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/f04a79677816453eab798df626b83eb2, entries=7, sequenceid=213, filesize=12.1 K 2023-05-29 13:01:06,925 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for 39e4914594287986759c4d3f07ca7705 in 29ms, sequenceid=213, compaction requested=false 2023-05-29 13:01:06,925 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:17,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 39e4914594287986759c4d3f07ca7705 2023-05-29 13:01:17,012 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-05-29 13:01:17,050 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=239 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/7aabf0cd15be4b8591dbcc775c0e78c6 2023-05-29 13:01:17,056 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/7aabf0cd15be4b8591dbcc775c0e78c6 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7aabf0cd15be4b8591dbcc775c0e78c6 2023-05-29 13:01:17,060 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7aabf0cd15be4b8591dbcc775c0e78c6, entries=23, 
sequenceid=239, filesize=29.0 K 2023-05-29 13:01:17,061 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=5.25 KB/5380 for 39e4914594287986759c4d3f07ca7705 in 49ms, sequenceid=239, compaction requested=true 2023-05-29 13:01:17,061 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:17,061 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:01:17,061 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 13:01:17,062 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 151069 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 13:01:17,062 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1912): 39e4914594287986759c4d3f07ca7705/info is initiating minor compaction (all files) 2023-05-29 13:01:17,062 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 39e4914594287986759c4d3f07ca7705/info in TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 2023-05-29 13:01:17,063 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/010200849a354c3a8b2e472821237531, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/f04a79677816453eab798df626b83eb2, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7aabf0cd15be4b8591dbcc775c0e78c6] into tmpdir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp, totalSize=147.5 K 2023-05-29 13:01:17,063 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 010200849a354c3a8b2e472821237531, keycount=96, bloomtype=ROW, size=106.4 K, encoding=NONE, compression=NONE, seqNum=202, earliestPutTs=1685365198275 2023-05-29 13:01:17,063 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting f04a79677816453eab798df626b83eb2, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=213, earliestPutTs=1685365264884 2023-05-29 13:01:17,063 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 7aabf0cd15be4b8591dbcc775c0e78c6, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=239, earliestPutTs=1685365266896 2023-05-29 13:01:17,074 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] throttle.PressureAwareThroughputController(145): 39e4914594287986759c4d3f07ca7705#info#compaction#48 average throughput is 64.65 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 13:01:17,086 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/0e87b9aea84241a4a7c6a0cc73eb9b1e as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0e87b9aea84241a4a7c6a0cc73eb9b1e 2023-05-29 13:01:17,092 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 39e4914594287986759c4d3f07ca7705/info of 39e4914594287986759c4d3f07ca7705 into 0e87b9aea84241a4a7c6a0cc73eb9b1e(size=138.3 K), total size for store is 138.3 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-29 13:01:17,092 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:17,092 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705., storeName=39e4914594287986759c4d3f07ca7705/info, priority=13, startTime=1685365277061; duration=0sec 2023-05-29 13:01:17,092 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:01:19,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 39e4914594287986759c4d3f07ca7705 2023-05-29 13:01:19,036 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 13:01:19,044 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=250 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/a9b6a412e8444b43a6206b49ebde9bf2 2023-05-29 13:01:19,050 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/a9b6a412e8444b43a6206b49ebde9bf2 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/a9b6a412e8444b43a6206b49ebde9bf2 2023-05-29 13:01:19,055 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/a9b6a412e8444b43a6206b49ebde9bf2, entries=7, sequenceid=250, filesize=12.1 K 2023-05-29 13:01:19,056 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for 39e4914594287986759c4d3f07ca7705 in 20ms, sequenceid=250, compaction requested=false 2023-05-29 13:01:19,056 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status 
journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:19,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 39e4914594287986759c4d3f07ca7705 2023-05-29 13:01:19,056 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-05-29 13:01:19,067 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=39e4914594287986759c4d3f07ca7705, server=jenkins-hbase4.apache.org,33891,1685365185127 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-29 13:01:19,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] ipc.CallRunner(144): callId: 246 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36786 deadline: 1685365289066, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=39e4914594287986759c4d3f07ca7705, server=jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:01:19,068 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=272 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/9cc8bbb80565478faa2865456f1d9c62 2023-05-29 13:01:19,074 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/9cc8bbb80565478faa2865456f1d9c62 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/9cc8bbb80565478faa2865456f1d9c62 2023-05-29 13:01:19,078 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/9cc8bbb80565478faa2865456f1d9c62, entries=19, sequenceid=272, filesize=24.8 K 2023-05-29 13:01:19,079 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=10.51 KB/10760 for 39e4914594287986759c4d3f07ca7705 in 23ms, sequenceid=272, compaction requested=true 2023-05-29 13:01:19,079 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:19,079 DEBUG [MemStoreFlusher.0] 
regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:01:19,079 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 13:01:19,080 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 179419 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 13:01:19,080 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1912): 39e4914594287986759c4d3f07ca7705/info is initiating minor compaction (all files) 2023-05-29 13:01:19,080 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 39e4914594287986759c4d3f07ca7705/info in TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 2023-05-29 13:01:19,081 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0e87b9aea84241a4a7c6a0cc73eb9b1e, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/a9b6a412e8444b43a6206b49ebde9bf2, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/9cc8bbb80565478faa2865456f1d9c62] into tmpdir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp, totalSize=175.2 K 2023-05-29 13:01:19,081 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 0e87b9aea84241a4a7c6a0cc73eb9b1e, keycount=126, bloomtype=ROW, size=138.3 K, encoding=NONE, compression=NONE, seqNum=239, earliestPutTs=1685365198275 2023-05-29 13:01:19,081 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting a9b6a412e8444b43a6206b49ebde9bf2, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=250, earliestPutTs=1685365277012 2023-05-29 13:01:19,082 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 9cc8bbb80565478faa2865456f1d9c62, keycount=19, bloomtype=ROW, size=24.8 K, encoding=NONE, compression=NONE, seqNum=272, earliestPutTs=1685365279036 2023-05-29 13:01:19,092 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] throttle.PressureAwareThroughputController(145): 39e4914594287986759c4d3f07ca7705#info#compaction#51 average throughput is 77.99 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 13:01:19,106 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/87e17409e6064acba5127b36656e1379 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/87e17409e6064acba5127b36656e1379 2023-05-29 13:01:19,111 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 39e4914594287986759c4d3f07ca7705/info of 39e4914594287986759c4d3f07ca7705 into 87e17409e6064acba5127b36656e1379(size=165.8 K), total size for store is 165.8 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-29 13:01:19,111 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:19,111 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705., storeName=39e4914594287986759c4d3f07ca7705/info, priority=13, startTime=1685365279079; duration=0sec 2023-05-29 13:01:19,111 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:01:29,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 39e4914594287986759c4d3f07ca7705 2023-05-29 13:01:29,087 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=11.56 KB heapSize=12.63 KB 2023-05-29 13:01:29,096 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=11.56 KB at sequenceid=287 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/6d6548e4bf9943298c487f5a9ca6cb93 2023-05-29 13:01:29,103 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/6d6548e4bf9943298c487f5a9ca6cb93 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/6d6548e4bf9943298c487f5a9ca6cb93 2023-05-29 13:01:29,107 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/6d6548e4bf9943298c487f5a9ca6cb93, entries=11, sequenceid=287, filesize=16.3 K 2023-05-29 13:01:29,108 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~11.56 KB/11836, heapSize ~12.61 KB/12912, currentSize=0 B/0 for 39e4914594287986759c4d3f07ca7705 in 21ms, sequenceid=287, compaction requested=false 2023-05-29 13:01:29,108 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status 
journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:31,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 39e4914594287986759c4d3f07ca7705 2023-05-29 13:01:31,095 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 13:01:31,105 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=297 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/d0842c41c43c4c268e757a47c5f17b7f 2023-05-29 13:01:31,111 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/d0842c41c43c4c268e757a47c5f17b7f as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/d0842c41c43c4c268e757a47c5f17b7f 2023-05-29 13:01:31,116 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/d0842c41c43c4c268e757a47c5f17b7f, entries=7, sequenceid=297, filesize=12.1 K 2023-05-29 13:01:31,119 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=14.71 KB/15064 for 39e4914594287986759c4d3f07ca7705 in 24ms, sequenceid=297, compaction requested=true 2023-05-29 13:01:31,119 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:31,119 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:01:31,119 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 13:01:31,121 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 198927 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 13:01:31,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33891] regionserver.HRegion(9158): Flush requested on 39e4914594287986759c4d3f07ca7705 2023-05-29 13:01:31,121 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1912): 39e4914594287986759c4d3f07ca7705/info is initiating minor compaction (all files) 2023-05-29 13:01:31,121 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 39e4914594287986759c4d3f07ca7705/info in TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 
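[Editor's note] The RegionTooBusyException logged at 13:01:19 above is the region server's back-pressure signal: HRegion.checkResources() rejects the Mutate because the region's memstore is over its blocking limit (32.0 K here, far below any production default, so presumably lowered by this test). The stock HBase client already retries this internally; the sketch below only illustrates what handling it explicitly could look like. The table name and column family come from the log; the row key, value, retry count and backoff are made up.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.RegionTooBusyException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BusyRegionPutSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("TestLogRolling-testLogRolling"))) {
          Put put = new Put(Bytes.toBytes("row0100"));                      // made-up row key
          put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), new byte[1024]);
          long backoffMs = 100;
          for (int attempt = 1; ; attempt++) {
            try {
              table.put(put);                // server-side checkResources() may reject this
              break;                         // write accepted
            } catch (RegionTooBusyException e) {
              // Depending on client retry settings this may instead surface wrapped in a
              // retries-exhausted exception; it is treated directly here for brevity.
              if (attempt >= 5) {
                throw e;                     // give up after a few attempts
              }
              Thread.sleep(backoffMs);       // let the in-flight flush drain the memstore
              backoffMs *= 2;                // simple exponential backoff
            }
          }
        }
      }
    }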
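[Editor's note] The "Exploring compaction algorithm has selected 3 files ... with 1 in ratio" entries above come from ExploringCompactionPolicy. At its core is a ratio test over a candidate window of store files; the snippet below is a simplified paraphrase of that test only (the real policy also explores different windows and enforces min/max file counts and size limits), and the 1.2 ratio is the usual default, which this test may well override. The window sizes are illustrative, not taken from the log.

    // Simplified paraphrase of the "in ratio" check: within a window, no single
    // file may be larger than `ratio` times the combined size of the other files.
    public class RatioCheckSketch {
      static boolean filesInRatio(long[] sizes, double ratio) {
        long total = 0;
        for (long s : sizes) {
          total += s;
        }
        for (long s : sizes) {
          if (s > ratio * (total - s)) {
            return false;                    // one file dominates the window
          }
        }
        return true;
      }

      public static void main(String[] args) {
        long[] window = { 100_000, 80_000, 60_000 };    // illustrative byte sizes
        System.out.println(filesInRatio(window, 1.2));  // prints true: 100000 <= 1.2 * 140000
      }
    }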
2023-05-29 13:01:31,121 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=16.81 KB heapSize=18.25 KB 2023-05-29 13:01:31,121 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/87e17409e6064acba5127b36656e1379, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/6d6548e4bf9943298c487f5a9ca6cb93, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/d0842c41c43c4c268e757a47c5f17b7f] into tmpdir=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp, totalSize=194.3 K 2023-05-29 13:01:31,122 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 87e17409e6064acba5127b36656e1379, keycount=152, bloomtype=ROW, size=165.8 K, encoding=NONE, compression=NONE, seqNum=272, earliestPutTs=1685365198275 2023-05-29 13:01:31,122 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting 6d6548e4bf9943298c487f5a9ca6cb93, keycount=11, bloomtype=ROW, size=16.3 K, encoding=NONE, compression=NONE, seqNum=287, earliestPutTs=1685365279057 2023-05-29 13:01:31,122 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] compactions.Compactor(207): Compacting d0842c41c43c4c268e757a47c5f17b7f, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=297, earliestPutTs=1685365291088 2023-05-29 13:01:31,144 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=16.81 KB at sequenceid=316 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/62b6d285428944edbd891cf5b2e33d7e 2023-05-29 13:01:31,147 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] throttle.PressureAwareThroughputController(145): 39e4914594287986759c4d3f07ca7705#info#compaction#55 average throughput is 87.22 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 13:01:31,156 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/62b6d285428944edbd891cf5b2e33d7e as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/62b6d285428944edbd891cf5b2e33d7e 2023-05-29 13:01:31,162 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/62b6d285428944edbd891cf5b2e33d7e, entries=16, sequenceid=316, filesize=21.6 K 2023-05-29 13:01:31,163 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~16.81 KB/17216, heapSize ~18.23 KB/18672, currentSize=9.46 KB/9684 for 39e4914594287986759c4d3f07ca7705 in 42ms, sequenceid=316, compaction requested=false 2023-05-29 13:01:31,163 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:31,179 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/dc6cad11d2d04f8180dbc73efdbde3bd as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/dc6cad11d2d04f8180dbc73efdbde3bd 2023-05-29 13:01:31,184 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 39e4914594287986759c4d3f07ca7705/info of 39e4914594287986759c4d3f07ca7705 into dc6cad11d2d04f8180dbc73efdbde3bd(size=184.9 K), total size for store is 206.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
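[Editor's note] The PressureAwareThroughputController entries above ("average throughput is 87.22 MB/second ... total limit is 50.00 MB/second") report compaction throttling: the compactor sleeps whenever writing would push it past the configured limit, and these compactions are too small to need any sleeping. Below is a self-contained sketch of that idea, not HBase's implementation; the 50 MB/s figure is taken from the log, the 4 MB chunk size and iteration count are arbitrary.

    public class ThroughputThrottleSketch {
      public static void main(String[] args) throws InterruptedException {
        final double limitBytesPerSec = 50.0 * 1024 * 1024;  // 50 MB/s, as in the log
        final int chunk = 4 * 1024 * 1024;                    // pretend each write is 4 MB
        long written = 0;
        long start = System.nanoTime();
        for (int i = 0; i < 16; i++) {
          written += chunk;                                   // stand-in for writing compacted cells
          double elapsedSec = (System.nanoTime() - start) / 1e9;
          double minSecForBytes = written / limitBytesPerSec; // time the limit allows for this much data
          long sleepMs = (long) ((minSecForBytes - elapsedSec) * 1000);
          if (sleepMs > 0) {
            Thread.sleep(sleepMs);                            // throttle: we are ahead of the limit
          }
        }
        double totalSec = (System.nanoTime() - start) / 1e9;
        System.out.printf("wrote %d MB in %.2f s (~%.1f MB/s)%n",
            written / (1024 * 1024), totalSec, written / (1024.0 * 1024) / totalSec);
      }
    }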
2023-05-29 13:01:31,184 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:31,184 INFO [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705., storeName=39e4914594287986759c4d3f07ca7705/info, priority=13, startTime=1685365291119; duration=0sec 2023-05-29 13:01:31,184 DEBUG [RS:0;jenkins-hbase4:33891-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 13:01:33,134 INFO [Listener at localhost/46451] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-05-29 13:01:33,151 INFO [Listener at localhost/46451] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127/jenkins-hbase4.apache.org%2C33891%2C1685365185127.1685365185506 with entries=311, filesize=307.49 KB; new WAL /user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127/jenkins-hbase4.apache.org%2C33891%2C1685365185127.1685365293135 2023-05-29 13:01:33,151 DEBUG [Listener at localhost/46451] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44399,DS-5cca71e6-c3cf-449a-842e-b1549e71aa7b,DISK], DatanodeInfoWithStorage[127.0.0.1:45935,DS-56b29cdc-7d5f-4b7b-93c1-fe823f045c06,DISK]] 2023-05-29 13:01:33,151 DEBUG [Listener at localhost/46451] wal.AbstractFSWAL(716): hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127/jenkins-hbase4.apache.org%2C33891%2C1685365185127.1685365185506 is not closed yet, will try archiving it next time 2023-05-29 13:01:33,157 INFO [Listener at localhost/46451] regionserver.HRegion(2745): Flushing 39e4914594287986759c4d3f07ca7705 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-29 13:01:33,166 INFO [Listener at localhost/46451] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=329 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/e48a5007436c425fbc879d6ef94cf8bc 2023-05-29 13:01:33,171 DEBUG [Listener at localhost/46451] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/.tmp/info/e48a5007436c425fbc879d6ef94cf8bc as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/e48a5007436c425fbc879d6ef94cf8bc 2023-05-29 13:01:33,176 INFO [Listener at localhost/46451] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/e48a5007436c425fbc879d6ef94cf8bc, entries=9, sequenceid=329, filesize=14.2 K 2023-05-29 13:01:33,177 INFO [Listener at localhost/46451] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=0 B/0 for 
39e4914594287986759c4d3f07ca7705 in 20ms, sequenceid=329, compaction requested=true 2023-05-29 13:01:33,177 DEBUG [Listener at localhost/46451] regionserver.HRegion(2446): Flush status journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:33,177 INFO [Listener at localhost/46451] regionserver.HRegion(2745): Flushing 3545d098cc2beda6407baa8daa20a51d 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-29 13:01:33,190 INFO [Listener at localhost/46451] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/namespace/3545d098cc2beda6407baa8daa20a51d/.tmp/info/e11b282b071944afb9cfe3fddff942df 2023-05-29 13:01:33,195 DEBUG [Listener at localhost/46451] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/namespace/3545d098cc2beda6407baa8daa20a51d/.tmp/info/e11b282b071944afb9cfe3fddff942df as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/namespace/3545d098cc2beda6407baa8daa20a51d/info/e11b282b071944afb9cfe3fddff942df 2023-05-29 13:01:33,200 INFO [Listener at localhost/46451] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/namespace/3545d098cc2beda6407baa8daa20a51d/info/e11b282b071944afb9cfe3fddff942df, entries=2, sequenceid=6, filesize=4.8 K 2023-05-29 13:01:33,201 INFO [Listener at localhost/46451] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 3545d098cc2beda6407baa8daa20a51d in 24ms, sequenceid=6, compaction requested=false 2023-05-29 13:01:33,201 DEBUG [Listener at localhost/46451] regionserver.HRegion(2446): Flush status journal for 3545d098cc2beda6407baa8daa20a51d: 2023-05-29 13:01:33,202 DEBUG [Listener at localhost/46451] regionserver.HRegion(2446): Flush status journal for 330c5ee3d16c7b817ba0762e5abf2144: 2023-05-29 13:01:33,202 INFO [Listener at localhost/46451] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-05-29 13:01:33,210 INFO [Listener at localhost/46451] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/.tmp/info/2532012d744543a88ba7071744f04a40 2023-05-29 13:01:33,215 DEBUG [Listener at localhost/46451] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/.tmp/info/2532012d744543a88ba7071744f04a40 as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/info/2532012d744543a88ba7071744f04a40 2023-05-29 13:01:33,219 INFO [Listener at localhost/46451] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/info/2532012d744543a88ba7071744f04a40, entries=16, sequenceid=24, filesize=7.0 K 2023-05-29 13:01:33,220 INFO [Listener at localhost/46451] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2312, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 18ms, sequenceid=24, compaction requested=false 2023-05-29 13:01:33,220 DEBUG [Listener at 
localhost/46451] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-29 13:01:33,226 INFO [Listener at localhost/46451] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127/jenkins-hbase4.apache.org%2C33891%2C1685365185127.1685365293135 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127/jenkins-hbase4.apache.org%2C33891%2C1685365185127.1685365293220 2023-05-29 13:01:33,226 DEBUG [Listener at localhost/46451] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45935,DS-56b29cdc-7d5f-4b7b-93c1-fe823f045c06,DISK], DatanodeInfoWithStorage[127.0.0.1:44399,DS-5cca71e6-c3cf-449a-842e-b1549e71aa7b,DISK]] 2023-05-29 13:01:33,226 DEBUG [Listener at localhost/46451] wal.AbstractFSWAL(716): hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127/jenkins-hbase4.apache.org%2C33891%2C1685365185127.1685365293135 is not closed yet, will try archiving it next time 2023-05-29 13:01:33,226 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127/jenkins-hbase4.apache.org%2C33891%2C1685365185127.1685365185506 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/oldWALs/jenkins-hbase4.apache.org%2C33891%2C1685365185127.1685365185506 2023-05-29 13:01:33,228 INFO [Listener at localhost/46451] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-05-29 13:01:33,230 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127/jenkins-hbase4.apache.org%2C33891%2C1685365185127.1685365293135 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/oldWALs/jenkins-hbase4.apache.org%2C33891%2C1685365185127.1685365293135 2023-05-29 13:01:33,328 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-29 13:01:33,329 INFO [Listener at localhost/46451] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-29 13:01:33,329 DEBUG [Listener at localhost/46451] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x27afede1 to 127.0.0.1:57344 2023-05-29 13:01:33,329 DEBUG [Listener at localhost/46451] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 13:01:33,329 DEBUG [Listener at localhost/46451] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-29 13:01:33,329 DEBUG [Listener at localhost/46451] util.JVMClusterUtil(257): Found active master hash=1314191147, stopped=false 2023-05-29 13:01:33,329 INFO [Listener at localhost/46451] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,43673,1685365185089 2023-05-29 13:01:33,331 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): regionserver:33891-0x10077065d330001, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 13:01:33,331 INFO [Listener at localhost/46451] procedure2.ProcedureExecutor(629): Stopping 2023-05-29 13:01:33,331 DEBUG [Listener at 
localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 13:01:33,331 DEBUG [Listener at localhost/46451] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5a0af7b5 to 127.0.0.1:57344 2023-05-29 13:01:33,332 DEBUG [Listener at localhost/46451] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 13:01:33,331 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 13:01:33,332 INFO [Listener at localhost/46451] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33891,1685365185127' ***** 2023-05-29 13:01:33,332 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33891-0x10077065d330001, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 13:01:33,332 INFO [Listener at localhost/46451] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 13:01:33,332 INFO [RS:0;jenkins-hbase4:33891] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 13:01:33,332 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 13:01:33,332 INFO [RS:0;jenkins-hbase4:33891] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-29 13:01:33,332 INFO [RS:0;jenkins-hbase4:33891] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-29 13:01:33,332 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 13:01:33,332 INFO [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(3303): Received CLOSE for 39e4914594287986759c4d3f07ca7705 2023-05-29 13:01:33,333 INFO [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(3303): Received CLOSE for 3545d098cc2beda6407baa8daa20a51d 2023-05-29 13:01:33,333 INFO [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(3303): Received CLOSE for 330c5ee3d16c7b817ba0762e5abf2144 2023-05-29 13:01:33,333 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 39e4914594287986759c4d3f07ca7705, disabling compactions & flushes 2023-05-29 13:01:33,333 INFO [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:01:33,333 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 2023-05-29 13:01:33,333 DEBUG [RS:0;jenkins-hbase4:33891] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0fe3b1ca to 127.0.0.1:57344 2023-05-29 13:01:33,333 DEBUG [RS:0;jenkins-hbase4:33891] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 13:01:33,333 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 
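[Editor's note] The WAL rolls at 13:01:33,151 and 13:01:33,226 above are requested from the test thread ([Listener at localhost/46451]). Outside a test, the same thing can be asked for through the client Admin API; a minimal sketch, reusing the server name from this log (a real caller would look the server name up from cluster status rather than hard-coding it):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RollWalSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Server name format is host,port,startcode, exactly as it appears in the log.
          ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org,33891,1685365185127");
          admin.rollWALWriter(rs);   // closes the current WAL and starts a new one on that server
        }
      }
    }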
2023-05-29 13:01:33,333 INFO [RS:0;jenkins-hbase4:33891] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 13:01:33,333 INFO [RS:0;jenkins-hbase4:33891] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 13:01:33,333 INFO [RS:0;jenkins-hbase4:33891] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-29 13:01:33,333 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. after waiting 0 ms 2023-05-29 13:01:33,333 INFO [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 13:01:33,333 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 2023-05-29 13:01:33,334 INFO [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-05-29 13:01:33,334 DEBUG [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(1478): Online Regions={39e4914594287986759c4d3f07ca7705=TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705., 3545d098cc2beda6407baa8daa20a51d=hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d., 330c5ee3d16c7b817ba0762e5abf2144=TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144., 1588230740=hbase:meta,,1.1588230740} 2023-05-29 13:01:33,334 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 13:01:33,334 DEBUG [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(1504): Waiting on 1588230740, 330c5ee3d16c7b817ba0762e5abf2144, 3545d098cc2beda6407baa8daa20a51d, 39e4914594287986759c4d3f07ca7705 2023-05-29 13:01:33,335 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 13:01:33,338 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 13:01:33,340 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 13:01:33,342 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 13:01:33,355 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/699bce38f4644773a9f04efdd542469e.58724a1b1084121c6c9f35ff5a00f772->hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/699bce38f4644773a9f04efdd542469e-top, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/8f89f8bb7a354e03be2dc8f7e45f8f07, 
hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/TestLogRolling-testLogRolling=58724a1b1084121c6c9f35ff5a00f772-a2dc0df096eb4c2a9f3cff2bd55c370b, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/440ccff8fe2a4413a8bde1d55b7b413c, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0dd11053156f46a18a873d960297536c, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7105b1d181754d3f9dd913a055f32528, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/754f03ef6cfb4371823eada5012331fc, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/d8f66e27848d4ee3a3933ffd5f8d5553, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7b73690b81a341e38d50942b091678ac, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0ed5cb619bb4410cba39dfd2953f7be1, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/010200849a354c3a8b2e472821237531, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/95a05a792a2b4c36ab1bee4ac74af646, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/f04a79677816453eab798df626b83eb2, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0e87b9aea84241a4a7c6a0cc73eb9b1e, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7aabf0cd15be4b8591dbcc775c0e78c6, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/a9b6a412e8444b43a6206b49ebde9bf2, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/87e17409e6064acba5127b36656e1379, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/9cc8bbb80565478faa2865456f1d9c62, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/6d6548e4bf9943298c487f5a9ca6cb93, 
hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/d0842c41c43c4c268e757a47c5f17b7f] to archive 2023-05-29 13:01:33,356 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-29 13:01:33,358 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/699bce38f4644773a9f04efdd542469e.58724a1b1084121c6c9f35ff5a00f772 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/699bce38f4644773a9f04efdd542469e.58724a1b1084121c6c9f35ff5a00f772 2023-05-29 13:01:33,358 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-05-29 13:01:33,358 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-29 13:01:33,359 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 13:01:33,359 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 13:01:33,359 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-29 13:01:33,359 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/8f89f8bb7a354e03be2dc8f7e45f8f07 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/8f89f8bb7a354e03be2dc8f7e45f8f07 2023-05-29 13:01:33,361 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/TestLogRolling-testLogRolling=58724a1b1084121c6c9f35ff5a00f772-a2dc0df096eb4c2a9f3cff2bd55c370b to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/TestLogRolling-testLogRolling=58724a1b1084121c6c9f35ff5a00f772-a2dc0df096eb4c2a9f3cff2bd55c370b 2023-05-29 13:01:33,362 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/440ccff8fe2a4413a8bde1d55b7b413c to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/440ccff8fe2a4413a8bde1d55b7b413c 2023-05-29 13:01:33,363 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0dd11053156f46a18a873d960297536c to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0dd11053156f46a18a873d960297536c 2023-05-29 13:01:33,364 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7105b1d181754d3f9dd913a055f32528 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7105b1d181754d3f9dd913a055f32528 2023-05-29 13:01:33,365 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/754f03ef6cfb4371823eada5012331fc to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/754f03ef6cfb4371823eada5012331fc 2023-05-29 13:01:33,366 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/d8f66e27848d4ee3a3933ffd5f8d5553 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/d8f66e27848d4ee3a3933ffd5f8d5553 2023-05-29 13:01:33,367 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7b73690b81a341e38d50942b091678ac to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7b73690b81a341e38d50942b091678ac 2023-05-29 13:01:33,368 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): 
Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0ed5cb619bb4410cba39dfd2953f7be1 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0ed5cb619bb4410cba39dfd2953f7be1 2023-05-29 13:01:33,369 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/010200849a354c3a8b2e472821237531 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/010200849a354c3a8b2e472821237531 2023-05-29 13:01:33,370 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/95a05a792a2b4c36ab1bee4ac74af646 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/95a05a792a2b4c36ab1bee4ac74af646 2023-05-29 13:01:33,371 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/f04a79677816453eab798df626b83eb2 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/f04a79677816453eab798df626b83eb2 2023-05-29 13:01:33,372 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0e87b9aea84241a4a7c6a0cc73eb9b1e to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/0e87b9aea84241a4a7c6a0cc73eb9b1e 2023-05-29 13:01:33,373 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7aabf0cd15be4b8591dbcc775c0e78c6 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/7aabf0cd15be4b8591dbcc775c0e78c6 2023-05-29 13:01:33,374 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] 
backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/a9b6a412e8444b43a6206b49ebde9bf2 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/a9b6a412e8444b43a6206b49ebde9bf2 2023-05-29 13:01:33,375 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/87e17409e6064acba5127b36656e1379 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/87e17409e6064acba5127b36656e1379 2023-05-29 13:01:33,376 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/9cc8bbb80565478faa2865456f1d9c62 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/9cc8bbb80565478faa2865456f1d9c62 2023-05-29 13:01:33,377 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/6d6548e4bf9943298c487f5a9ca6cb93 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/6d6548e4bf9943298c487f5a9ca6cb93 2023-05-29 13:01:33,378 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/d0842c41c43c4c268e757a47c5f17b7f to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/info/d0842c41c43c4c268e757a47c5f17b7f 2023-05-29 13:01:33,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/39e4914594287986759c4d3f07ca7705/recovered.edits/332.seqid, newMaxSeqId=332, maxSeqId=106 2023-05-29 13:01:33,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 
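[Editor's note] All of the HFileArchiver moves above follow one pattern: a store file under .../data/<namespace>/<table>/<region>/<family>/ is relocated to the same relative path under .../archive/data/.... The helper below is only string handling that mirrors those paths (it is not an HBase API), using one of the file names from the log:

    public class ArchivePathSketch {
      static String toArchivePath(String storeFilePath) {
        // Assumes the first "/data/" segment is the HBase data root, as in this log.
        return storeFilePath.replaceFirst("/data/", "/archive/data/");
      }

      public static void main(String[] args) {
        String src = "hdfs://localhost:35585/user/jenkins/test-data/"
            + "1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/"
            + "39e4914594287986759c4d3f07ca7705/info/d0842c41c43c4c268e757a47c5f17b7f";
        System.out.println(toArchivePath(src));  // prints the .../archive/data/default/... path seen above
      }
    }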
2023-05-29 13:01:33,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 39e4914594287986759c4d3f07ca7705: 2023-05-29 13:01:33,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1685365200346.39e4914594287986759c4d3f07ca7705. 2023-05-29 13:01:33,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3545d098cc2beda6407baa8daa20a51d, disabling compactions & flushes 2023-05-29 13:01:33,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. 2023-05-29 13:01:33,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. 2023-05-29 13:01:33,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. after waiting 0 ms 2023-05-29 13:01:33,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. 2023-05-29 13:01:33,386 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-29 13:01:33,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/hbase/namespace/3545d098cc2beda6407baa8daa20a51d/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-29 13:01:33,389 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. 2023-05-29 13:01:33,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3545d098cc2beda6407baa8daa20a51d: 2023-05-29 13:01:33,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685365185659.3545d098cc2beda6407baa8daa20a51d. 2023-05-29 13:01:33,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 330c5ee3d16c7b817ba0762e5abf2144, disabling compactions & flushes 2023-05-29 13:01:33,389 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144. 2023-05-29 13:01:33,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144. 2023-05-29 13:01:33,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144. after waiting 0 ms 2023-05-29 13:01:33,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144. 
2023-05-29 13:01:33,390 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144/info/699bce38f4644773a9f04efdd542469e.58724a1b1084121c6c9f35ff5a00f772->hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/58724a1b1084121c6c9f35ff5a00f772/info/699bce38f4644773a9f04efdd542469e-bottom] to archive 2023-05-29 13:01:33,391 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-29 13:01:33,392 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144/info/699bce38f4644773a9f04efdd542469e.58724a1b1084121c6c9f35ff5a00f772 to hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/archive/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144/info/699bce38f4644773a9f04efdd542469e.58724a1b1084121c6c9f35ff5a00f772 2023-05-29 13:01:33,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/data/default/TestLogRolling-testLogRolling/330c5ee3d16c7b817ba0762e5abf2144/recovered.edits/111.seqid, newMaxSeqId=111, maxSeqId=106 2023-05-29 13:01:33,396 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144. 2023-05-29 13:01:33,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 330c5ee3d16c7b817ba0762e5abf2144: 2023-05-29 13:01:33,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1685365200346.330c5ee3d16c7b817ba0762e5abf2144. 2023-05-29 13:01:33,441 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-29 13:01:33,441 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-29 13:01:33,536 INFO [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33891,1685365185127; all regions closed. 
2023-05-29 13:01:33,536 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:01:33,542 DEBUG [RS:0;jenkins-hbase4:33891] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/oldWALs 2023-05-29 13:01:33,542 INFO [RS:0;jenkins-hbase4:33891] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33891%2C1685365185127.meta:.meta(num 1685365185611) 2023-05-29 13:01:33,542 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/WALs/jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:01:33,547 DEBUG [RS:0;jenkins-hbase4:33891] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/oldWALs 2023-05-29 13:01:33,547 INFO [RS:0;jenkins-hbase4:33891] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33891%2C1685365185127:(num 1685365293220) 2023-05-29 13:01:33,547 DEBUG [RS:0;jenkins-hbase4:33891] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 13:01:33,547 INFO [RS:0;jenkins-hbase4:33891] regionserver.LeaseManager(133): Closed leases 2023-05-29 13:01:33,547 INFO [RS:0;jenkins-hbase4:33891] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-29 13:01:33,547 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 13:01:33,548 INFO [RS:0;jenkins-hbase4:33891] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33891 2023-05-29 13:01:33,551 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): regionserver:33891-0x10077065d330001, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33891,1685365185127 2023-05-29 13:01:33,551 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 13:01:33,551 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): regionserver:33891-0x10077065d330001, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 13:01:33,552 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33891,1685365185127] 2023-05-29 13:01:33,552 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33891,1685365185127; numProcessing=1 2023-05-29 13:01:33,553 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33891,1685365185127 already deleted, retry=false 2023-05-29 13:01:33,553 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33891,1685365185127 expired; onlineServers=0 2023-05-29 13:01:33,553 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,43673,1685365185089' ***** 
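[Editor's note] By this point the region server's rolled WALs have been moved under oldWALs ("Moved 1 WAL file(s) to .../oldWALs" above). For completeness, a short sketch that lists that directory with the plain Hadoop FileSystem API; the NameNode address and test-data path are the ones from this run and would differ elsewhere:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListOldWalsSketch {
      public static void main(String[] args) throws Exception {
        Path oldWals = new Path("hdfs://localhost:35585/user/jenkins/test-data/"
            + "1971cad9-8699-6107-b40b-1f536b591665/oldWALs");
        Configuration conf = new Configuration();
        try (FileSystem fs = oldWals.getFileSystem(conf)) {
          for (FileStatus st : fs.listStatus(oldWals)) {
            // Prints size and name of each archived WAL, e.g. the .1685365185506 file above.
            System.out.println(st.getLen() + "\t" + st.getPath().getName());
          }
        }
      }
    }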
2023-05-29 13:01:33,553 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-29 13:01:33,554 DEBUG [M:0;jenkins-hbase4:43673] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a6b7a39, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 13:01:33,554 INFO [M:0;jenkins-hbase4:43673] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43673,1685365185089 2023-05-29 13:01:33,554 INFO [M:0;jenkins-hbase4:43673] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43673,1685365185089; all regions closed. 2023-05-29 13:01:33,554 DEBUG [M:0;jenkins-hbase4:43673] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 13:01:33,554 DEBUG [M:0;jenkins-hbase4:43673] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-29 13:01:33,554 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-29 13:01:33,554 DEBUG [M:0;jenkins-hbase4:43673] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-29 13:01:33,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365185256] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365185256,5,FailOnTimeoutGroup] 2023-05-29 13:01:33,555 INFO [M:0;jenkins-hbase4:43673] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-29 13:01:33,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365185255] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365185255,5,FailOnTimeoutGroup] 2023-05-29 13:01:33,555 INFO [M:0;jenkins-hbase4:43673] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-29 13:01:33,555 INFO [M:0;jenkins-hbase4:43673] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-29 13:01:33,555 DEBUG [M:0;jenkins-hbase4:43673] master.HMaster(1512): Stopping service threads 2023-05-29 13:01:33,555 INFO [M:0;jenkins-hbase4:43673] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-29 13:01:33,556 ERROR [M:0;jenkins-hbase4:43673] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-29 13:01:33,556 INFO [M:0;jenkins-hbase4:43673] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-29 13:01:33,556 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-29 13:01:33,556 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-29 13:01:33,556 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 13:01:33,556 DEBUG [M:0;jenkins-hbase4:43673] zookeeper.ZKUtil(398): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-29 13:01:33,556 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 13:01:33,556 WARN [M:0;jenkins-hbase4:43673] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-29 13:01:33,556 INFO [M:0;jenkins-hbase4:43673] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-29 13:01:33,557 INFO [M:0;jenkins-hbase4:43673] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-29 13:01:33,557 DEBUG [M:0;jenkins-hbase4:43673] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 13:01:33,557 INFO [M:0;jenkins-hbase4:43673] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 13:01:33,557 DEBUG [M:0;jenkins-hbase4:43673] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 13:01:33,557 DEBUG [M:0;jenkins-hbase4:43673] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 13:01:33,557 DEBUG [M:0;jenkins-hbase4:43673] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-29 13:01:33,557 INFO [M:0;jenkins-hbase4:43673] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.71 KB heapSize=78.42 KB 2023-05-29 13:01:33,566 INFO [M:0;jenkins-hbase4:43673] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.71 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/eddadcb1f92f4f6e93131928300357bd 2023-05-29 13:01:33,571 INFO [M:0;jenkins-hbase4:43673] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eddadcb1f92f4f6e93131928300357bd 2023-05-29 13:01:33,573 DEBUG [M:0;jenkins-hbase4:43673] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/eddadcb1f92f4f6e93131928300357bd as hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/eddadcb1f92f4f6e93131928300357bd 2023-05-29 13:01:33,577 INFO [M:0;jenkins-hbase4:43673] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eddadcb1f92f4f6e93131928300357bd 2023-05-29 13:01:33,577 INFO [M:0;jenkins-hbase4:43673] regionserver.HStore(1080): Added hdfs://localhost:35585/user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/eddadcb1f92f4f6e93131928300357bd, entries=18, sequenceid=160, filesize=6.9 K 2023-05-29 13:01:33,578 INFO [M:0;jenkins-hbase4:43673] regionserver.HRegion(2948): Finished flush of dataSize ~64.71 KB/66268, heapSize ~78.41 KB/80288, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=160, compaction requested=false 2023-05-29 13:01:33,579 INFO [M:0;jenkins-hbase4:43673] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 13:01:33,579 DEBUG [M:0;jenkins-hbase4:43673] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 13:01:33,580 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/1971cad9-8699-6107-b40b-1f536b591665/MasterData/WALs/jenkins-hbase4.apache.org,43673,1685365185089 2023-05-29 13:01:33,583 INFO [M:0;jenkins-hbase4:43673] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-29 13:01:33,583 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 13:01:33,584 INFO [M:0;jenkins-hbase4:43673] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43673 2023-05-29 13:01:33,587 DEBUG [M:0;jenkins-hbase4:43673] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,43673,1685365185089 already deleted, retry=false 2023-05-29 13:01:33,652 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): regionserver:33891-0x10077065d330001, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 13:01:33,652 INFO [RS:0;jenkins-hbase4:33891] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33891,1685365185127; zookeeper connection closed. 
2023-05-29 13:01:33,652 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): regionserver:33891-0x10077065d330001, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 13:01:33,653 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@70e3a8c6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@70e3a8c6 2023-05-29 13:01:33,653 INFO [Listener at localhost/46451] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-29 13:01:33,752 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 13:01:33,752 DEBUG [Listener at localhost/46451-EventThread] zookeeper.ZKWatcher(600): master:43673-0x10077065d330000, quorum=127.0.0.1:57344, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 13:01:33,752 INFO [M:0;jenkins-hbase4:43673] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43673,1685365185089; zookeeper connection closed. 2023-05-29 13:01:33,754 WARN [Listener at localhost/46451] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 13:01:33,757 INFO [Listener at localhost/46451] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 13:01:33,865 WARN [BP-1645454783-172.31.14.131-1685365184555 heartbeating to localhost/127.0.0.1:35585] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 13:01:33,865 WARN [BP-1645454783-172.31.14.131-1685365184555 heartbeating to localhost/127.0.0.1:35585] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1645454783-172.31.14.131-1685365184555 (Datanode Uuid 8d45eb5c-6199-4404-ba88-5a951b933e91) service to localhost/127.0.0.1:35585 2023-05-29 13:01:33,865 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/cluster_5aba55ac-9a98-2cfc-a009-0935526c3880/dfs/data/data3/current/BP-1645454783-172.31.14.131-1685365184555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 13:01:33,866 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/cluster_5aba55ac-9a98-2cfc-a009-0935526c3880/dfs/data/data4/current/BP-1645454783-172.31.14.131-1685365184555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 13:01:33,870 WARN [Listener at localhost/46451] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 13:01:33,875 INFO [Listener at localhost/46451] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 13:01:33,937 WARN [BP-1645454783-172.31.14.131-1685365184555 heartbeating to localhost/127.0.0.1:35585] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1645454783-172.31.14.131-1685365184555 (Datanode Uuid d7d4900f-1bfd-4bdf-82d8-fb3c6098be77) service to localhost/127.0.0.1:35585 2023-05-29 13:01:33,937 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/cluster_5aba55ac-9a98-2cfc-a009-0935526c3880/dfs/data/data1/current/BP-1645454783-172.31.14.131-1685365184555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 13:01:33,938 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/cluster_5aba55ac-9a98-2cfc-a009-0935526c3880/dfs/data/data2/current/BP-1645454783-172.31.14.131-1685365184555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 13:01:33,993 INFO [Listener at localhost/46451] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 13:01:34,109 INFO [Listener at localhost/46451] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-29 13:01:34,140 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-29 13:01:34,151 INFO [Listener at localhost/46451] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=107 (was 95) - Thread LEAK? -, OpenFileDescriptor=542 (was 498) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=16 (was 47), ProcessCount=168 (was 168), AvailableMemoryMB=2612 (was 2919) 2023-05-29 13:01:34,159 INFO [Listener at localhost/46451] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=107, OpenFileDescriptor=542, MaxFileDescriptor=60000, SystemLoadAverage=16, ProcessCount=168, AvailableMemoryMB=2612 2023-05-29 13:01:34,159 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-29 13:01:34,159 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/hadoop.log.dir so I do NOT create it in target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69 2023-05-29 13:01:34,159 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c734c3fd-90a3-18c0-661c-7a70ee1d8f54/hadoop.tmp.dir so I do NOT create it in target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69 2023-05-29 13:01:34,159 INFO [Listener at localhost/46451] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/cluster_d24e8b81-2d5a-77f9-c368-2be298582621, deleteOnExit=true 2023-05-29 13:01:34,159 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-29 13:01:34,160 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting test.cache.data to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/test.cache.data in system properties and HBase conf 2023-05-29 13:01:34,160 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/hadoop.tmp.dir in system properties and HBase conf 2023-05-29 13:01:34,160 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/hadoop.log.dir in system properties and HBase conf 2023-05-29 13:01:34,160 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-29 13:01:34,160 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-29 13:01:34,160 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-29 13:01:34,160 DEBUG [Listener at localhost/46451] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-29 13:01:34,160 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-29 13:01:34,160 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-29 13:01:34,160 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-29 13:01:34,161 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 13:01:34,161 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-29 13:01:34,161 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): 
Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-29 13:01:34,161 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 13:01:34,161 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 13:01:34,161 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-29 13:01:34,161 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/nfs.dump.dir in system properties and HBase conf 2023-05-29 13:01:34,161 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/java.io.tmpdir in system properties and HBase conf 2023-05-29 13:01:34,161 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 13:01:34,161 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-29 13:01:34,161 INFO [Listener at localhost/46451] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-29 13:01:34,163 WARN [Listener at localhost/46451] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-29 13:01:34,166 WARN [Listener at localhost/46451] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 13:01:34,166 WARN [Listener at localhost/46451] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 13:01:34,203 WARN [Listener at localhost/46451] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 13:01:34,204 INFO [Listener at localhost/46451] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 13:01:34,208 INFO [Listener at localhost/46451] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/java.io.tmpdir/Jetty_localhost_41069_hdfs____.squp6o/webapp 2023-05-29 13:01:34,298 INFO [Listener at localhost/46451] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41069 2023-05-29 13:01:34,300 WARN [Listener at localhost/46451] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-29 13:01:34,303 WARN [Listener at localhost/46451] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 13:01:34,303 WARN [Listener at localhost/46451] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 13:01:34,344 WARN [Listener at localhost/38789] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 13:01:34,376 WARN [Listener at localhost/38789] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 13:01:34,378 WARN [Listener at localhost/38789] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 13:01:34,379 INFO [Listener at localhost/38789] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 13:01:34,391 INFO [Listener at localhost/38789] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/java.io.tmpdir/Jetty_localhost_46451_datanode____fju31x/webapp 2023-05-29 13:01:34,490 INFO [Listener at localhost/38789] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46451 2023-05-29 13:01:34,496 WARN [Listener at localhost/43711] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 13:01:34,507 WARN [Listener at localhost/43711] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 13:01:34,509 WARN [Listener at localhost/43711] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 13:01:34,510 INFO [Listener at localhost/43711] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 13:01:34,513 INFO [Listener at localhost/43711] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/java.io.tmpdir/Jetty_localhost_34841_datanode____.nwyyqt/webapp 2023-05-29 13:01:34,588 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1274f06cf068a6fb: Processing first storage report for DS-3ef2dafa-06d2-41ae-9782-e13e2e5860cd from datanode 7aaf15d3-73db-4fdc-a8d9-e874b8263a35 2023-05-29 13:01:34,589 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1274f06cf068a6fb: from storage DS-3ef2dafa-06d2-41ae-9782-e13e2e5860cd node DatanodeRegistration(127.0.0.1:44633, datanodeUuid=7aaf15d3-73db-4fdc-a8d9-e874b8263a35, infoPort=35883, infoSecurePort=0, ipcPort=43711, storageInfo=lv=-57;cid=testClusterID;nsid=1181437260;c=1685365294168), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 13:01:34,589 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1274f06cf068a6fb: Processing first storage report for DS-a2ba48da-cd74-4fd1-a306-ed7ab7ace7ca from datanode 7aaf15d3-73db-4fdc-a8d9-e874b8263a35 2023-05-29 13:01:34,589 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1274f06cf068a6fb: from storage DS-a2ba48da-cd74-4fd1-a306-ed7ab7ace7ca node DatanodeRegistration(127.0.0.1:44633, datanodeUuid=7aaf15d3-73db-4fdc-a8d9-e874b8263a35, infoPort=35883, infoSecurePort=0, ipcPort=43711, storageInfo=lv=-57;cid=testClusterID;nsid=1181437260;c=1685365294168), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 13:01:34,617 INFO [Listener at localhost/43711] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34841 2023-05-29 13:01:34,623 WARN [Listener at localhost/37761] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 13:01:34,714 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2a5e957bb33b98f9: Processing first storage report for DS-7a191631-c95a-42fe-88ce-6eb95249e048 from datanode 132b1593-a97b-45b3-b394-05a4200e1e1c 2023-05-29 13:01:34,714 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2a5e957bb33b98f9: from storage DS-7a191631-c95a-42fe-88ce-6eb95249e048 node DatanodeRegistration(127.0.0.1:43149, datanodeUuid=132b1593-a97b-45b3-b394-05a4200e1e1c, infoPort=43695, infoSecurePort=0, ipcPort=37761, storageInfo=lv=-57;cid=testClusterID;nsid=1181437260;c=1685365294168), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 13:01:34,714 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2a5e957bb33b98f9: Processing first storage report for DS-15879739-aa79-457c-abf9-07b9f201020f from datanode 132b1593-a97b-45b3-b394-05a4200e1e1c 2023-05-29 13:01:34,714 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2a5e957bb33b98f9: from storage DS-15879739-aa79-457c-abf9-07b9f201020f node DatanodeRegistration(127.0.0.1:43149, datanodeUuid=132b1593-a97b-45b3-b394-05a4200e1e1c, infoPort=43695, infoSecurePort=0, ipcPort=37761, storageInfo=lv=-57;cid=testClusterID;nsid=1181437260;c=1685365294168), 
blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 13:01:34,730 DEBUG [Listener at localhost/37761] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69 2023-05-29 13:01:34,732 INFO [Listener at localhost/37761] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/cluster_d24e8b81-2d5a-77f9-c368-2be298582621/zookeeper_0, clientPort=62660, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/cluster_d24e8b81-2d5a-77f9-c368-2be298582621/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/cluster_d24e8b81-2d5a-77f9-c368-2be298582621/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-29 13:01:34,733 INFO [Listener at localhost/37761] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62660 2023-05-29 13:01:34,733 INFO [Listener at localhost/37761] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 13:01:34,734 INFO [Listener at localhost/37761] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 13:01:34,745 INFO [Listener at localhost/37761] util.FSUtils(471): Created version file at hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5 with version=8 2023-05-29 13:01:34,746 INFO [Listener at localhost/37761] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40317/user/jenkins/test-data/427cc1d2-324c-ff1e-71c7-cc41d4b2709d/hbase-staging 2023-05-29 13:01:34,747 INFO [Listener at localhost/37761] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 13:01:34,747 INFO [Listener at localhost/37761] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 13:01:34,747 INFO [Listener at localhost/37761] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 13:01:34,748 INFO [Listener at localhost/37761] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 13:01:34,748 INFO [Listener at localhost/37761] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 13:01:34,748 INFO [Listener at localhost/37761] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 
13:01:34,748 INFO [Listener at localhost/37761] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 13:01:34,749 INFO [Listener at localhost/37761] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39579 2023-05-29 13:01:34,750 INFO [Listener at localhost/37761] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 13:01:34,751 INFO [Listener at localhost/37761] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 13:01:34,752 INFO [Listener at localhost/37761] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39579 connecting to ZooKeeper ensemble=127.0.0.1:62660 2023-05-29 13:01:34,760 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:395790x0, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 13:01:34,761 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39579-0x1007708098e0000 connected 2023-05-29 13:01:34,774 DEBUG [Listener at localhost/37761] zookeeper.ZKUtil(164): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 13:01:34,774 DEBUG [Listener at localhost/37761] zookeeper.ZKUtil(164): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 13:01:34,775 DEBUG [Listener at localhost/37761] zookeeper.ZKUtil(164): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 13:01:34,780 DEBUG [Listener at localhost/37761] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39579 2023-05-29 13:01:34,780 DEBUG [Listener at localhost/37761] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39579 2023-05-29 13:01:34,780 DEBUG [Listener at localhost/37761] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39579 2023-05-29 13:01:34,781 DEBUG [Listener at localhost/37761] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39579 2023-05-29 13:01:34,781 DEBUG [Listener at localhost/37761] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39579 2023-05-29 13:01:34,781 INFO [Listener at localhost/37761] master.HMaster(444): hbase.rootdir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5, hbase.cluster.distributed=false 2023-05-29 13:01:34,793 INFO [Listener at localhost/37761] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 13:01:34,794 INFO [Listener at localhost/37761] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 13:01:34,794 INFO [Listener at 
localhost/37761] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 13:01:34,794 INFO [Listener at localhost/37761] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 13:01:34,794 INFO [Listener at localhost/37761] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 13:01:34,794 INFO [Listener at localhost/37761] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 13:01:34,794 INFO [Listener at localhost/37761] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 13:01:34,795 INFO [Listener at localhost/37761] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34447 2023-05-29 13:01:34,795 INFO [Listener at localhost/37761] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 13:01:34,796 DEBUG [Listener at localhost/37761] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 13:01:34,797 INFO [Listener at localhost/37761] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 13:01:34,797 INFO [Listener at localhost/37761] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 13:01:34,798 INFO [Listener at localhost/37761] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34447 connecting to ZooKeeper ensemble=127.0.0.1:62660 2023-05-29 13:01:34,801 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): regionserver:344470x0, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 13:01:34,802 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34447-0x1007708098e0001 connected 2023-05-29 13:01:34,802 DEBUG [Listener at localhost/37761] zookeeper.ZKUtil(164): regionserver:34447-0x1007708098e0001, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 13:01:34,803 DEBUG [Listener at localhost/37761] zookeeper.ZKUtil(164): regionserver:34447-0x1007708098e0001, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 13:01:34,803 DEBUG [Listener at localhost/37761] zookeeper.ZKUtil(164): regionserver:34447-0x1007708098e0001, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 13:01:34,806 DEBUG [Listener at localhost/37761] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34447 2023-05-29 13:01:34,807 DEBUG [Listener at localhost/37761] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34447 2023-05-29 13:01:34,807 DEBUG [Listener at localhost/37761] ipc.RpcExecutor(311): 
Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34447 2023-05-29 13:01:34,807 DEBUG [Listener at localhost/37761] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34447 2023-05-29 13:01:34,807 DEBUG [Listener at localhost/37761] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34447 2023-05-29 13:01:34,809 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,39579,1685365294747 2023-05-29 13:01:34,810 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 13:01:34,810 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,39579,1685365294747 2023-05-29 13:01:34,811 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 13:01:34,811 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): regionserver:34447-0x1007708098e0001, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 13:01:34,812 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 13:01:34,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 13:01:34,813 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 13:01:34,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,39579,1685365294747 from backup master directory 2023-05-29 13:01:34,816 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,39579,1685365294747 2023-05-29 13:01:34,816 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 13:01:34,816 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 13:01:34,816 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,39579,1685365294747 2023-05-29 13:01:34,826 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/hbase.id with ID: 748801e6-cf67-469a-8dbb-84f06895a25a 2023-05-29 13:01:34,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 13:01:34,837 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 13:01:34,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1e138e9f to 127.0.0.1:62660 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 13:01:34,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@50c9a0d6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 13:01:34,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 13:01:34,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-29 13:01:34,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 13:01:34,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/data/master/store-tmp 2023-05-29 13:01:34,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect 
now enable 2023-05-29 13:01:34,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 13:01:34,856 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 13:01:34,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 13:01:34,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 13:01:34,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 13:01:34,856 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 13:01:34,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 13:01:34,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/WALs/jenkins-hbase4.apache.org,39579,1685365294747 2023-05-29 13:01:34,860 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39579%2C1685365294747, suffix=, logDir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/WALs/jenkins-hbase4.apache.org,39579,1685365294747, archiveDir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/oldWALs, maxLogs=10 2023-05-29 13:01:34,866 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/WALs/jenkins-hbase4.apache.org,39579,1685365294747/jenkins-hbase4.apache.org%2C39579%2C1685365294747.1685365294860 2023-05-29 13:01:34,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43149,DS-7a191631-c95a-42fe-88ce-6eb95249e048,DISK], DatanodeInfoWithStorage[127.0.0.1:44633,DS-3ef2dafa-06d2-41ae-9782-e13e2e5860cd,DISK]] 2023-05-29 13:01:34,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-29 13:01:34,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 13:01:34,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 13:01:34,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 13:01:34,869 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-29 13:01:34,871 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-29 13:01:34,871 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-29 13:01:34,872 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 13:01:34,873 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 13:01:34,873 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 13:01:34,876 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 13:01:34,878 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 13:01:34,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=779639, jitterRate=-0.008638530969619751}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 13:01:34,879 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 13:01:34,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-29 13:01:34,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-29 13:01:34,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-29 13:01:34,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-29 13:01:34,881 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-29 13:01:34,881 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-29 13:01:34,881 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-29 13:01:34,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-29 13:01:34,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-29 13:01:34,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-29 13:01:34,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-29 13:01:34,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-29 13:01:34,896 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-29 13:01:34,896 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-29 13:01:34,898 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 13:01:34,898 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-29 13:01:34,898 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-29 13:01:34,899 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-29 13:01:34,901 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 13:01:34,901 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): regionserver:34447-0x1007708098e0001, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 13:01:34,901 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 13:01:34,903 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,39579,1685365294747, sessionid=0x1007708098e0000, setting cluster-up flag (Was=false) 2023-05-29 13:01:34,906 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 13:01:34,910 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-29 13:01:34,911 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39579,1685365294747 2023-05-29 13:01:34,913 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 
13:01:34,921 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-29 13:01:34,921 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39579,1685365294747 2023-05-29 13:01:34,922 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/.hbase-snapshot/.tmp 2023-05-29 13:01:34,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-29 13:01:34,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 13:01:34,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 13:01:34,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 13:01:34,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 13:01:34,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-29 13:01:34,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 13:01:34,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 13:01:34,925 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 13:01:34,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685365324926 2023-05-29 13:01:34,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-29 13:01:34,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-29 13:01:34,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-29 13:01:34,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-29 13:01:34,927 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-29 13:01:34,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-29 13:01:34,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 13:01:34,927 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 13:01:34,928 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-29 13:01:34,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-29 13:01:34,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-29 13:01:34,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-29 13:01:34,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-29 13:01:34,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-29 13:01:34,929 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365294929,5,FailOnTimeoutGroup] 2023-05-29 13:01:34,929 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 13:01:34,930 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365294929,5,FailOnTimeoutGroup] 2023-05-29 13:01:34,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-29 13:01:34,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-29 13:01:34,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-29 13:01:34,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-29 13:01:34,940 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 13:01:34,940 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 13:01:34,940 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5 2023-05-29 13:01:34,947 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 13:01:34,948 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 13:01:34,949 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/info 2023-05-29 13:01:34,949 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 13:01:34,950 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 13:01:34,950 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 13:01:34,951 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/rep_barrier 2023-05-29 13:01:34,951 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 13:01:34,952 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 13:01:34,952 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 13:01:34,953 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/table 2023-05-29 13:01:34,953 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 13:01:34,953 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 13:01:34,954 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740 2023-05-29 13:01:34,954 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740 2023-05-29 13:01:34,956 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 13:01:34,956 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 13:01:34,958 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 13:01:34,958 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=882319, jitterRate=0.1219266802072525}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 13:01:34,958 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 13:01:34,958 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 13:01:34,958 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 13:01:34,958 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 13:01:34,958 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 13:01:34,958 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 13:01:34,959 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 13:01:34,959 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 13:01:34,959 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 13:01:34,959 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-29 13:01:34,960 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-29 13:01:34,961 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-29 13:01:34,962 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 
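The bootstrap entries above print the full hbase:meta table descriptor (families info, rep_barrier and table, all IN_MEMORY with NONE bloom filters and small block sizes) before the region is created, opened once, closed, and queued for assignment. Once the cluster is serving, the same descriptor can be read back through the client API. A minimal sketch, assuming the standard HBase 2.x client classes; the Configuration/Connection setup is a placeholder and not part of the logged run:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    public final class MetaDescriptorSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();   // assumes hbase-site.xml on the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Read back the descriptor that InitMetaProcedure wrote under .tabledesc above.
          TableDescriptor meta = admin.getDescriptor(TableName.META_TABLE_NAME);
          for (ColumnFamilyDescriptor cf : meta.getColumnFamilies()) {
            System.out.printf("family=%s versions=%d inMemory=%b blocksize=%d%n",
                cf.getNameAsString(), cf.getMaxVersions(), cf.isInMemory(), cf.getBlocksize());
          }
        }
      }
    }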
2023-05-29 13:01:35,009 INFO [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(951): ClusterId : 748801e6-cf67-469a-8dbb-84f06895a25a 2023-05-29 13:01:35,010 DEBUG [RS:0;jenkins-hbase4:34447] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 13:01:35,012 DEBUG [RS:0;jenkins-hbase4:34447] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 13:01:35,012 DEBUG [RS:0;jenkins-hbase4:34447] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 13:01:35,015 DEBUG [RS:0;jenkins-hbase4:34447] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 13:01:35,016 DEBUG [RS:0;jenkins-hbase4:34447] zookeeper.ReadOnlyZKClient(139): Connect 0x2f1d02ae to 127.0.0.1:62660 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 13:01:35,019 DEBUG [RS:0;jenkins-hbase4:34447] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@21daf93d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 13:01:35,019 DEBUG [RS:0;jenkins-hbase4:34447] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25e569de, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 13:01:35,029 DEBUG [RS:0;jenkins-hbase4:34447] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:34447 2023-05-29 13:01:35,029 INFO [RS:0;jenkins-hbase4:34447] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 13:01:35,029 INFO [RS:0;jenkins-hbase4:34447] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 13:01:35,029 DEBUG [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-29 13:01:35,029 INFO [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,39579,1685365294747 with isa=jenkins-hbase4.apache.org/172.31.14.131:34447, startcode=1685365294793 2023-05-29 13:01:35,030 DEBUG [RS:0;jenkins-hbase4:34447] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 13:01:35,033 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32969, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 13:01:35,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39579] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34447,1685365294793 2023-05-29 13:01:35,034 DEBUG [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5 2023-05-29 13:01:35,035 DEBUG [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38789 2023-05-29 13:01:35,035 DEBUG [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 13:01:35,036 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 13:01:35,036 DEBUG [RS:0;jenkins-hbase4:34447] zookeeper.ZKUtil(162): regionserver:34447-0x1007708098e0001, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34447,1685365294793 2023-05-29 13:01:35,036 WARN [RS:0;jenkins-hbase4:34447] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 13:01:35,037 INFO [RS:0;jenkins-hbase4:34447] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 13:01:35,037 DEBUG [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(1946): logDir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/jenkins-hbase4.apache.org,34447,1685365294793 2023-05-29 13:01:35,037 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34447,1685365294793] 2023-05-29 13:01:35,040 DEBUG [RS:0;jenkins-hbase4:34447] zookeeper.ZKUtil(162): regionserver:34447-0x1007708098e0001, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34447,1685365294793 2023-05-29 13:01:35,041 DEBUG [RS:0;jenkins-hbase4:34447] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 13:01:35,041 INFO [RS:0;jenkins-hbase4:34447] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 13:01:35,042 INFO [RS:0;jenkins-hbase4:34447] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 13:01:35,042 INFO [RS:0;jenkins-hbase4:34447] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 13:01:35,042 INFO [RS:0;jenkins-hbase4:34447] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 13:01:35,043 INFO [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 13:01:35,043 INFO [RS:0;jenkins-hbase4:34447] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-29 13:01:35,044 DEBUG [RS:0;jenkins-hbase4:34447] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 13:01:35,044 DEBUG [RS:0;jenkins-hbase4:34447] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 13:01:35,044 DEBUG [RS:0;jenkins-hbase4:34447] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 13:01:35,044 DEBUG [RS:0;jenkins-hbase4:34447] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 13:01:35,044 DEBUG [RS:0;jenkins-hbase4:34447] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 13:01:35,044 DEBUG [RS:0;jenkins-hbase4:34447] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 13:01:35,044 DEBUG [RS:0;jenkins-hbase4:34447] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 13:01:35,044 DEBUG [RS:0;jenkins-hbase4:34447] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 13:01:35,044 DEBUG [RS:0;jenkins-hbase4:34447] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 13:01:35,044 DEBUG [RS:0;jenkins-hbase4:34447] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 13:01:35,045 INFO [RS:0;jenkins-hbase4:34447] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 13:01:35,045 INFO [RS:0;jenkins-hbase4:34447] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 13:01:35,045 INFO [RS:0;jenkins-hbase4:34447] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 13:01:35,057 INFO [RS:0;jenkins-hbase4:34447] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 13:01:35,057 INFO [RS:0;jenkins-hbase4:34447] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34447,1685365294793-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 13:01:35,067 INFO [RS:0;jenkins-hbase4:34447] regionserver.Replication(203): jenkins-hbase4.apache.org,34447,1685365294793 started 2023-05-29 13:01:35,067 INFO [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34447,1685365294793, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34447, sessionid=0x1007708098e0001 2023-05-29 13:01:35,067 DEBUG [RS:0;jenkins-hbase4:34447] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 13:01:35,067 DEBUG [RS:0;jenkins-hbase4:34447] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34447,1685365294793 2023-05-29 13:01:35,067 DEBUG [RS:0;jenkins-hbase4:34447] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34447,1685365294793' 2023-05-29 13:01:35,067 DEBUG [RS:0;jenkins-hbase4:34447] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 13:01:35,067 DEBUG [RS:0;jenkins-hbase4:34447] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 13:01:35,067 DEBUG [RS:0;jenkins-hbase4:34447] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 13:01:35,068 DEBUG [RS:0;jenkins-hbase4:34447] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 13:01:35,068 DEBUG [RS:0;jenkins-hbase4:34447] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34447,1685365294793 2023-05-29 13:01:35,068 DEBUG [RS:0;jenkins-hbase4:34447] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34447,1685365294793' 2023-05-29 13:01:35,068 DEBUG [RS:0;jenkins-hbase4:34447] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 13:01:35,068 DEBUG [RS:0;jenkins-hbase4:34447] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 13:01:35,068 DEBUG [RS:0;jenkins-hbase4:34447] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 13:01:35,068 INFO [RS:0;jenkins-hbase4:34447] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 13:01:35,068 INFO [RS:0;jenkins-hbase4:34447] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
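At this point one active master and one region server are serving, with both procedure managers started and quota support disabled. For reference, the same single-region-server topology can be brought up from a test with HBaseTestingUtility; a minimal sketch, assuming the hbase-server test artifact is on the classpath (the class name is illustrative):

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public final class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(1);              // one master, one region server, mini DFS and ZooKeeper
        try {
          System.out.println("active master: "
              + util.getMiniHBaseCluster().getMaster().getServerName());
        } finally {
          util.shutdownMiniCluster();          // tears down HBase, DFS and ZooKeeper
        }
      }
    }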
2023-05-29 13:01:35,112 DEBUG [jenkins-hbase4:39579] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-29 13:01:35,113 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34447,1685365294793, state=OPENING 2023-05-29 13:01:35,114 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-29 13:01:35,116 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 13:01:35,116 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34447,1685365294793}] 2023-05-29 13:01:35,116 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 13:01:35,170 INFO [RS:0;jenkins-hbase4:34447] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34447%2C1685365294793, suffix=, logDir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/jenkins-hbase4.apache.org,34447,1685365294793, archiveDir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/oldWALs, maxLogs=32 2023-05-29 13:01:35,176 INFO [RS:0;jenkins-hbase4:34447] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/jenkins-hbase4.apache.org,34447,1685365294793/jenkins-hbase4.apache.org%2C34447%2C1685365294793.1685365295170 2023-05-29 13:01:35,177 DEBUG [RS:0;jenkins-hbase4:34447] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44633,DS-3ef2dafa-06d2-41ae-9782-e13e2e5860cd,DISK], DatanodeInfoWithStorage[127.0.0.1:43149,DS-7a191631-c95a-42fe-88ce-6eb95249e048,DISK]] 2023-05-29 13:01:35,270 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34447,1685365294793 2023-05-29 13:01:35,270 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-29 13:01:35,272 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42176, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-29 13:01:35,275 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-29 13:01:35,275 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 13:01:35,276 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34447%2C1685365294793.meta, suffix=.meta, logDir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/jenkins-hbase4.apache.org,34447,1685365294793, archiveDir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/oldWALs, maxLogs=32 2023-05-29 13:01:35,282 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/jenkins-hbase4.apache.org,34447,1685365294793/jenkins-hbase4.apache.org%2C34447%2C1685365294793.meta.1685365295277.meta 2023-05-29 13:01:35,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44633,DS-3ef2dafa-06d2-41ae-9782-e13e2e5860cd,DISK], DatanodeInfoWithStorage[127.0.0.1:43149,DS-7a191631-c95a-42fe-88ce-6eb95249e048,DISK]] 2023-05-29 13:01:35,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-29 13:01:35,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-29 13:01:35,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-29 13:01:35,282 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-29 13:01:35,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-29 13:01:35,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 13:01:35,283 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-29 13:01:35,283 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-29 13:01:35,284 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 13:01:35,284 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/info 2023-05-29 13:01:35,284 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/info 2023-05-29 13:01:35,285 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 13:01:35,285 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 13:01:35,285 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 13:01:35,286 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/rep_barrier 2023-05-29 13:01:35,286 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/rep_barrier 2023-05-29 13:01:35,286 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 13:01:35,286 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 13:01:35,287 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 13:01:35,287 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/table 2023-05-29 13:01:35,287 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/table 2023-05-29 13:01:35,287 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 13:01:35,288 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 13:01:35,288 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740 2023-05-29 13:01:35,289 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740 2023-05-29 13:01:35,291 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 13:01:35,292 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 13:01:35,293 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=881401, jitterRate=0.12075936794281006}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 13:01:35,293 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 13:01:35,296 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685365295270 2023-05-29 13:01:35,300 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-29 13:01:35,300 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-29 13:01:35,301 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34447,1685365294793, state=OPEN 2023-05-29 13:01:35,303 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-29 13:01:35,304 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 13:01:35,306 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-29 13:01:35,306 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34447,1685365294793 in 188 msec 2023-05-29 13:01:35,308 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-29 13:01:35,308 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 347 msec 2023-05-29 13:01:35,310 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 385 msec 2023-05-29 13:01:35,310 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685365295310, completionTime=-1 2023-05-29 13:01:35,310 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-29 13:01:35,310 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-29 13:01:35,313 DEBUG [hconnection-0x78d56785-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 13:01:35,315 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42186, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 13:01:35,317 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-29 13:01:35,317 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685365355317 2023-05-29 13:01:35,317 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685365415317 2023-05-29 13:01:35,317 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-29 13:01:35,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39579,1685365294747-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 13:01:35,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39579,1685365294747-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 13:01:35,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39579,1685365294747-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 13:01:35,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:39579, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 13:01:35,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-29 13:01:35,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
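hbase:meta is now OPEN on the region server and its location has been published under /hbase/meta-region-server; the master then moves on to creating the namespace table below. A client resolves that published location through the region locator rather than reading ZooKeeper directly; a minimal sketch, assuming an open Connection as in the earlier fragment:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    final class MetaLocationSketch {
      static HRegionLocation locateMeta(Connection conn) throws IOException {
        try (RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // Returns the server currently hosting hbase:meta,,1 (replica 0).
          return locator.getRegionLocation(HConstants.EMPTY_START_ROW);
        }
      }
    }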
2023-05-29 13:01:35,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 13:01:35,325 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-29 13:01:35,325 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-29 13:01:35,327 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 13:01:35,328 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 13:01:35,329 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/.tmp/data/hbase/namespace/ea061d44ee087d78937c04698878f4b3 2023-05-29 13:01:35,330 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/.tmp/data/hbase/namespace/ea061d44ee087d78937c04698878f4b3 empty. 2023-05-29 13:01:35,330 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/.tmp/data/hbase/namespace/ea061d44ee087d78937c04698878f4b3 2023-05-29 13:01:35,330 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-29 13:01:35,341 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-29 13:01:35,343 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => ea061d44ee087d78937c04698878f4b3, NAME => 'hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/.tmp 2023-05-29 13:01:35,349 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 13:01:35,349 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing ea061d44ee087d78937c04698878f4b3, disabling compactions & flushes 2023-05-29 13:01:35,349 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. 
2023-05-29 13:01:35,349 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. 2023-05-29 13:01:35,349 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. after waiting 0 ms 2023-05-29 13:01:35,349 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. 2023-05-29 13:01:35,350 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. 2023-05-29 13:01:35,350 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for ea061d44ee087d78937c04698878f4b3: 2023-05-29 13:01:35,352 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 13:01:35,353 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365295353"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685365295353"}]},"ts":"1685365295353"} 2023-05-29 13:01:35,355 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 13:01:35,356 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 13:01:35,356 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365295356"}]},"ts":"1685365295356"} 2023-05-29 13:01:35,357 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-29 13:01:35,366 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ea061d44ee087d78937c04698878f4b3, ASSIGN}] 2023-05-29 13:01:35,367 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ea061d44ee087d78937c04698878f4b3, ASSIGN 2023-05-29 13:01:35,368 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=ea061d44ee087d78937c04698878f4b3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34447,1685365294793; forceNewPlan=false, retain=false 2023-05-29 13:01:35,519 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ea061d44ee087d78937c04698878f4b3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34447,1685365294793 2023-05-29 13:01:35,519 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365295519"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685365295519"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685365295519"}]},"ts":"1685365295519"} 2023-05-29 13:01:35,521 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure ea061d44ee087d78937c04698878f4b3, server=jenkins-hbase4.apache.org,34447,1685365294793}] 2023-05-29 13:01:35,676 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. 2023-05-29 13:01:35,676 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ea061d44ee087d78937c04698878f4b3, NAME => 'hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3.', STARTKEY => '', ENDKEY => ''} 2023-05-29 13:01:35,676 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace ea061d44ee087d78937c04698878f4b3 2023-05-29 13:01:35,676 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 13:01:35,676 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ea061d44ee087d78937c04698878f4b3 2023-05-29 13:01:35,676 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ea061d44ee087d78937c04698878f4b3 2023-05-29 13:01:35,677 INFO [StoreOpener-ea061d44ee087d78937c04698878f4b3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region ea061d44ee087d78937c04698878f4b3 2023-05-29 13:01:35,679 DEBUG [StoreOpener-ea061d44ee087d78937c04698878f4b3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/namespace/ea061d44ee087d78937c04698878f4b3/info 2023-05-29 13:01:35,679 DEBUG [StoreOpener-ea061d44ee087d78937c04698878f4b3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/namespace/ea061d44ee087d78937c04698878f4b3/info 2023-05-29 13:01:35,679 INFO [StoreOpener-ea061d44ee087d78937c04698878f4b3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ea061d44ee087d78937c04698878f4b3 columnFamilyName info 2023-05-29 13:01:35,680 INFO [StoreOpener-ea061d44ee087d78937c04698878f4b3-1] regionserver.HStore(310): Store=ea061d44ee087d78937c04698878f4b3/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 13:01:35,680 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/namespace/ea061d44ee087d78937c04698878f4b3 2023-05-29 13:01:35,681 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/namespace/ea061d44ee087d78937c04698878f4b3 2023-05-29 13:01:35,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ea061d44ee087d78937c04698878f4b3 2023-05-29 13:01:35,685 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/namespace/ea061d44ee087d78937c04698878f4b3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 13:01:35,685 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ea061d44ee087d78937c04698878f4b3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=757472, jitterRate=-0.03682549297809601}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 13:01:35,686 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ea061d44ee087d78937c04698878f4b3: 2023-05-29 13:01:35,688 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3., pid=6, masterSystemTime=1685365295672 2023-05-29 13:01:35,690 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. 2023-05-29 13:01:35,690 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. 
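The attribute set printed for the namespace region's single info family (BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', BLOCKSIZE => '8192') maps directly onto the public descriptor builders; the procedure entries that follow record the same table reaching OPEN and SUCCESS. A minimal sketch of declaring a comparable user table, assuming an Admin handle as above; the table name "demo" is hypothetical:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    final class CreateTableSketch {
      static void createDemoTable(Admin admin) throws IOException {
        TableDescriptorBuilder table = TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"));
        table.setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setInMemory(true)                   // IN_MEMORY => 'true'
            .setMaxVersions(10)                  // VERSIONS => '10'
            .setBlocksize(8192)                  // BLOCKSIZE => '8192'
            .build());
        // Goes through a master-side CreateTableProcedure, as pid=4 does above.
        admin.createTable(table.build());
      }
    }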
2023-05-29 13:01:35,690 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ea061d44ee087d78937c04698878f4b3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34447,1685365294793 2023-05-29 13:01:35,690 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685365295690"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685365295690"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685365295690"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685365295690"}]},"ts":"1685365295690"} 2023-05-29 13:01:35,694 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-29 13:01:35,694 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure ea061d44ee087d78937c04698878f4b3, server=jenkins-hbase4.apache.org,34447,1685365294793 in 171 msec 2023-05-29 13:01:35,695 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-29 13:01:35,695 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=ea061d44ee087d78937c04698878f4b3, ASSIGN in 330 msec 2023-05-29 13:01:35,696 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 13:01:35,696 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685365295696"}]},"ts":"1685365295696"} 2023-05-29 13:01:35,697 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-29 13:01:35,700 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 13:01:35,701 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 376 msec 2023-05-29 13:01:35,726 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-29 13:01:35,729 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-29 13:01:35,729 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 13:01:35,732 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-29 13:01:35,739 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): 
master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 13:01:35,742 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-05-29 13:01:35,744 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-29 13:01:35,749 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 13:01:35,754 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-05-29 13:01:35,770 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-29 13:01:35,773 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-29 13:01:35,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.957sec 2023-05-29 13:01:35,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-29 13:01:35,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-29 13:01:35,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-29 13:01:35,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39579,1685365294747-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-29 13:01:35,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39579,1685365294747-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
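The "Master has completed initialization 0.957sec" entry above marks the point at which the cluster becomes usable. As a hedged sketch (class name and timeout are illustrative, not from this log), a test can wait for that condition explicitly:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.master.HMaster;

public class MasterInitWaitSketch {
  // Poll until the active master reports initialization complete, i.e. the
  // state announced by the "Master has completed initialization" entry above.
  public static void waitForMasterInit(HBaseTestingUtility testUtil) throws Exception {
    testUtil.waitFor(60_000, () -> {
      HMaster master = testUtil.getMiniHBaseCluster().getMaster();
      return master != null && master.isInitialized();
    });
  }
}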
2023-05-29 13:01:35,776 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-29 13:01:35,810 DEBUG [Listener at localhost/37761] zookeeper.ReadOnlyZKClient(139): Connect 0x39223e45 to 127.0.0.1:62660 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 13:01:35,814 DEBUG [Listener at localhost/37761] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e5fd176, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 13:01:35,815 DEBUG [hconnection-0x200e66d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 13:01:35,817 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42194, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 13:01:35,818 INFO [Listener at localhost/37761] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,39579,1685365294747 2023-05-29 13:01:35,818 INFO [Listener at localhost/37761] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 13:01:35,821 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-29 13:01:35,821 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 13:01:35,822 INFO [Listener at localhost/37761] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-29 13:01:35,822 INFO [Listener at localhost/37761] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 13:01:35,824 INFO [Listener at localhost/37761] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/test.com,8080,1, archiveDir=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/oldWALs, maxLogs=32 2023-05-29 13:01:35,828 INFO [Listener at localhost/37761] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/test.com,8080,1/test.com%2C8080%2C1.1685365295824 2023-05-29 13:01:35,829 DEBUG [Listener at localhost/37761] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44633,DS-3ef2dafa-06d2-41ae-9782-e13e2e5860cd,DISK], DatanodeInfoWithStorage[127.0.0.1:43149,DS-7a191631-c95a-42fe-88ce-6eb95249e048,DISK]] 2023-05-29 13:01:35,834 INFO [Listener at localhost/37761] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/test.com,8080,1/test.com%2C8080%2C1.1685365295824 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/test.com,8080,1/test.com%2C8080%2C1.1685365295829 
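The WAL entries above (a WALProvider of type FSHLogProvider instantiated, a new WAL created under WALs/test.com,8080,1, then rolled) correspond to a test exercising the WAL API directly. The sketch below is a guess at that flow using the public branch-2.4 WAL classes; the WALFactory constructor and getWAL signature should be checked against the source, and the factory id simply echoes the prefix seen in the log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionInfoBuilder;
import org.apache.hadoop.hbase.wal.WAL;
import org.apache.hadoop.hbase.wal.WALFactory;

public class WalRollSketch {
  // Create an FSHLog-backed WAL with the "test.com,8080,1" style prefix seen
  // above, force a roll, and close the factory so the old file becomes
  // eligible for archiving to oldWALs.
  public static void createAndRoll(Configuration conf) throws Exception {
    WALFactory wals = new WALFactory(conf, "test.com,8080,1");
    try {
      RegionInfo dummyRegion =
          RegionInfoBuilder.newBuilder(TableName.valueOf("test")).build();
      WAL wal = wals.getWAL(dummyRegion);
      // Rolling opens a new writer; the previous file can then be archived.
      wal.rollWriter(true);
    } finally {
      wals.close();
    }
  }
}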
2023-05-29 13:01:35,834 DEBUG [Listener at localhost/37761] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43149,DS-7a191631-c95a-42fe-88ce-6eb95249e048,DISK], DatanodeInfoWithStorage[127.0.0.1:44633,DS-3ef2dafa-06d2-41ae-9782-e13e2e5860cd,DISK]] 2023-05-29 13:01:35,834 DEBUG [Listener at localhost/37761] wal.AbstractFSWAL(716): hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/test.com,8080,1/test.com%2C8080%2C1.1685365295824 is not closed yet, will try archiving it next time 2023-05-29 13:01:35,835 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/test.com,8080,1 2023-05-29 13:01:35,841 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/test.com,8080,1/test.com%2C8080%2C1.1685365295824 to hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/oldWALs/test.com%2C8080%2C1.1685365295824 2023-05-29 13:01:36,244 DEBUG [Listener at localhost/37761] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/oldWALs 2023-05-29 13:01:36,244 INFO [Listener at localhost/37761] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1685365295829) 2023-05-29 13:01:36,244 INFO [Listener at localhost/37761] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-29 13:01:36,244 DEBUG [Listener at localhost/37761] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x39223e45 to 127.0.0.1:62660 2023-05-29 13:01:36,244 DEBUG [Listener at localhost/37761] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 13:01:36,246 DEBUG [Listener at localhost/37761] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-29 13:01:36,246 DEBUG [Listener at localhost/37761] util.JVMClusterUtil(257): Found active master hash=1112832662, stopped=false 2023-05-29 13:01:36,246 INFO [Listener at localhost/37761] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,39579,1685365294747 2023-05-29 13:01:36,249 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 13:01:36,249 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): regionserver:34447-0x1007708098e0001, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 13:01:36,250 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 13:01:36,249 INFO [Listener at localhost/37761] procedure2.ProcedureExecutor(629): Stopping 2023-05-29 13:01:36,250 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 13:01:36,251 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34447-0x1007708098e0001, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 13:01:36,251 DEBUG [Listener 
at localhost/37761] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1e138e9f to 127.0.0.1:62660 2023-05-29 13:01:36,251 DEBUG [Listener at localhost/37761] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 13:01:36,251 INFO [Listener at localhost/37761] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,34447,1685365294793' ***** 2023-05-29 13:01:36,251 INFO [Listener at localhost/37761] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 13:01:36,251 INFO [RS:0;jenkins-hbase4:34447] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 13:01:36,251 INFO [RS:0;jenkins-hbase4:34447] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-29 13:01:36,251 INFO [RS:0;jenkins-hbase4:34447] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-29 13:01:36,252 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 13:01:36,252 INFO [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(3303): Received CLOSE for ea061d44ee087d78937c04698878f4b3 2023-05-29 13:01:36,252 INFO [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34447,1685365294793 2023-05-29 13:01:36,252 DEBUG [RS:0;jenkins-hbase4:34447] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2f1d02ae to 127.0.0.1:62660 2023-05-29 13:01:36,253 DEBUG [RS:0;jenkins-hbase4:34447] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 13:01:36,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ea061d44ee087d78937c04698878f4b3, disabling compactions & flushes 2023-05-29 13:01:36,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. 2023-05-29 13:01:36,253 INFO [RS:0;jenkins-hbase4:34447] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 13:01:36,253 INFO [RS:0;jenkins-hbase4:34447] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 13:01:36,253 INFO [RS:0;jenkins-hbase4:34447] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-29 13:01:36,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. 2023-05-29 13:01:36,253 INFO [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 13:01:36,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. after waiting 0 ms 2023-05-29 13:01:36,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. 
2023-05-29 13:01:36,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ea061d44ee087d78937c04698878f4b3 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-29 13:01:36,254 INFO [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-29 13:01:36,254 DEBUG [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, ea061d44ee087d78937c04698878f4b3=hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3.} 2023-05-29 13:01:36,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 13:01:36,254 DEBUG [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(1504): Waiting on 1588230740, ea061d44ee087d78937c04698878f4b3 2023-05-29 13:01:36,254 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 13:01:36,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 13:01:36,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 13:01:36,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 13:01:36,254 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-05-29 13:01:36,267 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/.tmp/info/3ff95a4a9ebe4b63ad4258568dd71fe9 2023-05-29 13:01:36,267 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/namespace/ea061d44ee087d78937c04698878f4b3/.tmp/info/67cc6fab6ac84149885ccfea2242e1de 2023-05-29 13:01:36,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/namespace/ea061d44ee087d78937c04698878f4b3/.tmp/info/67cc6fab6ac84149885ccfea2242e1de as hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/namespace/ea061d44ee087d78937c04698878f4b3/info/67cc6fab6ac84149885ccfea2242e1de 2023-05-29 13:01:36,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/namespace/ea061d44ee087d78937c04698878f4b3/info/67cc6fab6ac84149885ccfea2242e1de, entries=2, sequenceid=6, filesize=4.8 K 2023-05-29 13:01:36,281 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for ea061d44ee087d78937c04698878f4b3 in 28ms, sequenceid=6, compaction requested=false 2023-05-29 13:01:36,283 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/.tmp/table/15e67bba31b841e6a3828e4f6046561f 2023-05-29 13:01:36,286 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/namespace/ea061d44ee087d78937c04698878f4b3/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-29 13:01:36,286 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. 2023-05-29 13:01:36,286 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ea061d44ee087d78937c04698878f4b3: 2023-05-29 13:01:36,286 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685365295324.ea061d44ee087d78937c04698878f4b3. 2023-05-29 13:01:36,288 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/.tmp/info/3ff95a4a9ebe4b63ad4258568dd71fe9 as hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/info/3ff95a4a9ebe4b63ad4258568dd71fe9 2023-05-29 13:01:36,291 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/info/3ff95a4a9ebe4b63ad4258568dd71fe9, entries=10, sequenceid=9, filesize=5.9 K 2023-05-29 13:01:36,292 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/.tmp/table/15e67bba31b841e6a3828e4f6046561f as hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/table/15e67bba31b841e6a3828e4f6046561f 2023-05-29 13:01:36,297 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/table/15e67bba31b841e6a3828e4f6046561f, entries=2, sequenceid=9, filesize=4.7 K 2023-05-29 13:01:36,298 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1290, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 44ms, sequenceid=9, compaction requested=false 2023-05-29 13:01:36,298 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-29 13:01:36,305 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-05-29 13:01:36,305 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-29 13:01:36,305 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 13:01:36,305 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 13:01:36,305 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-29 13:01:36,454 INFO [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34447,1685365294793; all regions closed. 2023-05-29 13:01:36,455 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/jenkins-hbase4.apache.org,34447,1685365294793 2023-05-29 13:01:36,459 DEBUG [RS:0;jenkins-hbase4:34447] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/oldWALs 2023-05-29 13:01:36,459 INFO [RS:0;jenkins-hbase4:34447] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C34447%2C1685365294793.meta:.meta(num 1685365295277) 2023-05-29 13:01:36,460 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/WALs/jenkins-hbase4.apache.org,34447,1685365294793 2023-05-29 13:01:36,464 DEBUG [RS:0;jenkins-hbase4:34447] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/oldWALs 2023-05-29 13:01:36,464 INFO [RS:0;jenkins-hbase4:34447] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C34447%2C1685365294793:(num 1685365295170) 2023-05-29 13:01:36,464 DEBUG [RS:0;jenkins-hbase4:34447] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 13:01:36,464 INFO [RS:0;jenkins-hbase4:34447] regionserver.LeaseManager(133): Closed leases 2023-05-29 13:01:36,464 INFO [RS:0;jenkins-hbase4:34447] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-29 13:01:36,464 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
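The shutdown entries above show each FSHLog being closed and its last file moved to the shared oldWALs directory. If a test wanted to assert on that, something like the following sketch could count the archived files; the path construction (root dir plus HConstants.HREGION_OLDLOGDIR_NAME) is an assumption based on the directories appearing in this log.

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.HConstants;

public class OldWalsCountSketch {
  // Count files under <hbase.rootdir>/oldWALs, the directory that the
  // "Moved 1 WAL file(s) to .../oldWALs" entries above refer to.
  public static int countArchivedWals(HBaseTestingUtility testUtil) throws Exception {
    FileSystem fs = testUtil.getTestFileSystem();
    Path oldWals = new Path(testUtil.getDefaultRootDirPath(), HConstants.HREGION_OLDLOGDIR_NAME);
    FileStatus[] archived = fs.listStatus(oldWals);
    return archived.length;
  }
}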
2023-05-29 13:01:36,465 INFO [RS:0;jenkins-hbase4:34447] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34447 2023-05-29 13:01:36,470 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): regionserver:34447-0x1007708098e0001, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34447,1685365294793 2023-05-29 13:01:36,470 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 13:01:36,470 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): regionserver:34447-0x1007708098e0001, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 13:01:36,471 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34447,1685365294793] 2023-05-29 13:01:36,471 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34447,1685365294793; numProcessing=1 2023-05-29 13:01:36,472 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34447,1685365294793 already deleted, retry=false 2023-05-29 13:01:36,472 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34447,1685365294793 expired; onlineServers=0 2023-05-29 13:01:36,472 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,39579,1685365294747' ***** 2023-05-29 13:01:36,472 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-29 13:01:36,472 DEBUG [M:0;jenkins-hbase4:39579] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@466bf604, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 13:01:36,472 INFO [M:0;jenkins-hbase4:39579] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39579,1685365294747 2023-05-29 13:01:36,472 INFO [M:0;jenkins-hbase4:39579] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39579,1685365294747; all regions closed. 2023-05-29 13:01:36,472 DEBUG [M:0;jenkins-hbase4:39579] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 13:01:36,473 DEBUG [M:0;jenkins-hbase4:39579] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-29 13:01:36,473 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-29 13:01:36,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365294929] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685365294929,5,FailOnTimeoutGroup] 2023-05-29 13:01:36,473 DEBUG [M:0;jenkins-hbase4:39579] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-29 13:01:36,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365294929] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685365294929,5,FailOnTimeoutGroup] 2023-05-29 13:01:36,473 INFO [M:0;jenkins-hbase4:39579] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-29 13:01:36,474 INFO [M:0;jenkins-hbase4:39579] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-29 13:01:36,474 INFO [M:0;jenkins-hbase4:39579] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-29 13:01:36,474 DEBUG [M:0;jenkins-hbase4:39579] master.HMaster(1512): Stopping service threads 2023-05-29 13:01:36,474 INFO [M:0;jenkins-hbase4:39579] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-29 13:01:36,475 ERROR [M:0;jenkins-hbase4:39579] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup] 2023-05-29 13:01:36,475 INFO [M:0;jenkins-hbase4:39579] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-29 13:01:36,475 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-29 13:01:36,475 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-29 13:01:36,475 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 13:01:36,475 DEBUG [M:0;jenkins-hbase4:39579] zookeeper.ZKUtil(398): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-29 13:01:36,475 WARN [M:0;jenkins-hbase4:39579] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-29 13:01:36,475 INFO [M:0;jenkins-hbase4:39579] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-29 13:01:36,476 INFO [M:0;jenkins-hbase4:39579] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-29 13:01:36,476 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 13:01:36,476 DEBUG [M:0;jenkins-hbase4:39579] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 13:01:36,476 INFO [M:0;jenkins-hbase4:39579] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 13:01:36,476 DEBUG [M:0;jenkins-hbase4:39579] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 13:01:36,476 DEBUG [M:0;jenkins-hbase4:39579] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 13:01:36,476 DEBUG [M:0;jenkins-hbase4:39579] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-29 13:01:36,476 INFO [M:0;jenkins-hbase4:39579] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.07 KB heapSize=29.55 KB 2023-05-29 13:01:36,485 INFO [M:0;jenkins-hbase4:39579] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.07 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/05d695ff60b747e886996ca600f74593 2023-05-29 13:01:36,490 DEBUG [M:0;jenkins-hbase4:39579] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/05d695ff60b747e886996ca600f74593 as hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/05d695ff60b747e886996ca600f74593 2023-05-29 13:01:36,495 INFO [M:0;jenkins-hbase4:39579] regionserver.HStore(1080): Added hdfs://localhost:38789/user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/05d695ff60b747e886996ca600f74593, entries=8, sequenceid=66, filesize=6.3 K 2023-05-29 13:01:36,496 INFO [M:0;jenkins-hbase4:39579] regionserver.HRegion(2948): Finished flush of dataSize ~24.07 KB/24646, heapSize ~29.54 KB/30248, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 20ms, sequenceid=66, compaction requested=false 2023-05-29 13:01:36,497 INFO [M:0;jenkins-hbase4:39579] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 13:01:36,497 DEBUG [M:0;jenkins-hbase4:39579] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 13:01:36,497 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/bb36916f-25cd-15d9-879e-2c96d8dbded5/MasterData/WALs/jenkins-hbase4.apache.org,39579,1685365294747 2023-05-29 13:01:36,500 INFO [M:0;jenkins-hbase4:39579] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-29 13:01:36,500 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 13:01:36,501 INFO [M:0;jenkins-hbase4:39579] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39579 2023-05-29 13:01:36,505 DEBUG [M:0;jenkins-hbase4:39579] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,39579,1685365294747 already deleted, retry=false 2023-05-29 13:01:36,650 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 13:01:36,650 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): master:39579-0x1007708098e0000, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 13:01:36,650 INFO [M:0;jenkins-hbase4:39579] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39579,1685365294747; zookeeper connection closed. 
2023-05-29 13:01:36,750 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): regionserver:34447-0x1007708098e0001, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 13:01:36,750 DEBUG [Listener at localhost/37761-EventThread] zookeeper.ZKWatcher(600): regionserver:34447-0x1007708098e0001, quorum=127.0.0.1:62660, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 13:01:36,750 INFO [RS:0;jenkins-hbase4:34447] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34447,1685365294793; zookeeper connection closed. 2023-05-29 13:01:36,751 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@37cabe05] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@37cabe05 2023-05-29 13:01:36,751 INFO [Listener at localhost/37761] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-29 13:01:36,751 WARN [Listener at localhost/37761] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 13:01:36,755 INFO [Listener at localhost/37761] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 13:01:36,858 WARN [BP-355490336-172.31.14.131-1685365294168 heartbeating to localhost/127.0.0.1:38789] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 13:01:36,859 WARN [BP-355490336-172.31.14.131-1685365294168 heartbeating to localhost/127.0.0.1:38789] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-355490336-172.31.14.131-1685365294168 (Datanode Uuid 132b1593-a97b-45b3-b394-05a4200e1e1c) service to localhost/127.0.0.1:38789 2023-05-29 13:01:36,860 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/cluster_d24e8b81-2d5a-77f9-c368-2be298582621/dfs/data/data3/current/BP-355490336-172.31.14.131-1685365294168] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 13:01:36,860 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/cluster_d24e8b81-2d5a-77f9-c368-2be298582621/dfs/data/data4/current/BP-355490336-172.31.14.131-1685365294168] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 13:01:36,861 WARN [Listener at localhost/37761] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 13:01:36,864 INFO [Listener at localhost/37761] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 13:01:36,967 WARN [BP-355490336-172.31.14.131-1685365294168 heartbeating to localhost/127.0.0.1:38789] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 13:01:36,967 WARN [BP-355490336-172.31.14.131-1685365294168 heartbeating to localhost/127.0.0.1:38789] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-355490336-172.31.14.131-1685365294168 (Datanode Uuid 7aaf15d3-73db-4fdc-a8d9-e874b8263a35) service to localhost/127.0.0.1:38789 2023-05-29 13:01:36,968 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/cluster_d24e8b81-2d5a-77f9-c368-2be298582621/dfs/data/data1/current/BP-355490336-172.31.14.131-1685365294168] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 13:01:36,968 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/67771cb6-82ce-f06a-85d1-047afb5d6f69/cluster_d24e8b81-2d5a-77f9-c368-2be298582621/dfs/data/data2/current/BP-355490336-172.31.14.131-1685365294168] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 13:01:36,977 INFO [Listener at localhost/37761] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 13:01:37,047 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-29 13:01:37,087 INFO [Listener at localhost/37761] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-29 13:01:37,097 INFO [Listener at localhost/37761] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-29 13:01:37,109 INFO [Listener at localhost/37761] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=129 (was 107) - Thread LEAK? -, OpenFileDescriptor=566 (was 542) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=22 (was 16) - SystemLoadAverage LEAK? -, ProcessCount=168 (was 168), AvailableMemoryMB=2660 (was 2612) - AvailableMemoryMB LEAK? -
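The final ResourceChecker entry compares thread, file-descriptor, system-load and memory figures taken before and after the test to flag leaks. For context, the overall lifecycle that produces the "Minicluster is up" and "Minicluster is down" bookends in this log typically looks like the hedged JUnit skeleton below; the class name and the single placeholder test are illustrative, not the actual TestLogRolling code.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class MiniClusterLifecycleSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    // Brings up DFS, ZooKeeper, one master and one region server, ending with
    // the "Minicluster is up" entry seen earlier in this log.
    TEST_UTIL.startMiniCluster();
  }

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    // Drives the shutdown sequence logged above and ends with "Minicluster is
    // down"; the ResourceChecker then reports the before/after thread and
    // file-descriptor deltas.
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void testPlaceholder() throws Exception {
    // a real test body would exercise WAL rolling here
  }
}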