2023-02-14 21:01:43,193 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6
2023-02-14 21:01:43,204 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.client.TestAsyncClusterAdminApi2 timeout: 13 mins
2023-02-14 21:01:43,234 INFO [Time-limited test] hbase.ResourceChecker(147): before: client.TestAsyncClusterAdminApi2#testStop Thread=8, OpenFileDescriptor=260, MaxFileDescriptor=60000, SystemLoadAverage=288, ProcessCount=170, AvailableMemoryMB=5684
2023-02-14 21:01:43,239 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-02-14 21:01:43,239 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/cluster_6dc3721e-d3ec-6e0d-bdff-2016947e8490, deleteOnExit=true
2023-02-14 21:01:43,240 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-02-14 21:01:43,240 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/test.cache.data in system properties and HBase conf
2023-02-14 21:01:43,241 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/hadoop.tmp.dir in system properties and HBase conf
2023-02-14 21:01:43,241 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/hadoop.log.dir in system properties and HBase conf
2023-02-14 21:01:43,241 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/mapreduce.cluster.local.dir in system properties and HBase conf
2023-02-14 21:01:43,242 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-02-14 21:01:43,242 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-02-14 21:01:43,352 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-02-14 21:01:43,738 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
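The StartMiniClusterOption line above records how the test harness brings up the cluster: 1 master, 3 region servers, 3 data nodes, 1 ZooKeeper server, with the 13-minute class timeout enforced by HBaseClassTestRule. Purely as a minimal sketch, assuming a JUnit 4 large test built on the HBase 2.x test utilities (the class name MiniClusterSketch is a placeholder, not the actual test class), that setup usually looks like this:

import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.apache.hadoop.hbase.testclassification.LargeTests;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.experimental.categories.Category;

@Category(LargeTests.class)
public class MiniClusterSketch {
  // Enforces the per-class timeout reported above ("timeout: 13 mins" for large tests).
  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
      HBaseClassTestRule.forClass(MiniClusterSketch.class);

  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Mirrors the logged options: 1 master, 3 region servers, 3 data nodes, 1 ZK server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1).numRegionServers(3).numDataNodes(3).numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);
  }

  @AfterClass
  public static void tearDown() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }
}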
2023-02-14 21:01:43,742 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-02-14 21:01:43,742 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-02-14 21:01:43,743 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-02-14 21:01:43,743 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-02-14 21:01:43,743 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-02-14 21:01:43,743 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-02-14 21:01:43,744 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-02-14 21:01:43,744 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/dfs.journalnode.edits.dir in system properties and HBase conf
2023-02-14 21:01:43,745 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-02-14 21:01:43,745 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/nfs.dump.dir in system properties and HBase conf
2023-02-14 21:01:43,745 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/java.io.tmpdir in system properties and HBase conf
2023-02-14 21:01:43,746 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/dfs.journalnode.edits.dir in system properties and HBase conf
2023-02-14 21:01:43,746 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-02-14 21:01:43,746 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-02-14 21:01:44,185 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-02-14 21:01:44,188 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-02-14 21:01:44,869 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-02-14 21:01:45,008 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-02-14 21:01:45,022 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-02-14 21:01:45,055 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-02-14 21:01:45,084 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/java.io.tmpdir/Jetty_localhost_localdomain_41805_hdfs____g8lxmi/webapp
2023-02-14 21:01:45,214 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:41805
2023-02-14 21:01:45,222 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-02-14 21:01:45,222 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-02-14 21:01:45,735 WARN [Listener at localhost.localdomain/40959] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-02-14 21:01:45,838 WARN [Listener at localhost.localdomain/40959] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-02-14 21:01:45,854 WARN [Listener at localhost.localdomain/40959] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-02-14 21:01:45,859 INFO [Listener at localhost.localdomain/40959] log.Slf4jLog(67): jetty-6.1.26
2023-02-14 21:01:45,863 INFO [Listener at localhost.localdomain/40959] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/java.io.tmpdir/Jetty_localhost_46221_datanode____.98af6y/webapp
2023-02-14 21:01:45,937 INFO [Listener at localhost.localdomain/40959] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46221
2023-02-14 21:01:46,199 WARN [Listener at localhost.localdomain/43835] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-02-14 21:01:46,207 WARN [Listener at localhost.localdomain/43835] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-02-14 21:01:46,211 WARN [Listener at localhost.localdomain/43835] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-02-14 21:01:46,213 INFO [Listener at localhost.localdomain/43835] log.Slf4jLog(67): jetty-6.1.26
2023-02-14 21:01:46,218 INFO [Listener at localhost.localdomain/43835] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/java.io.tmpdir/Jetty_localhost_45633_datanode____9uf598/webapp
2023-02-14 21:01:46,294 INFO [Listener at localhost.localdomain/43835] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45633
2023-02-14 21:01:46,304 WARN [Listener at localhost.localdomain/37751] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-02-14 21:01:46,316 WARN [Listener at localhost.localdomain/37751] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-02-14 21:01:46,320 WARN [Listener at localhost.localdomain/37751] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-02-14 21:01:46,322 INFO [Listener at localhost.localdomain/37751] log.Slf4jLog(67): jetty-6.1.26
2023-02-14 21:01:46,328 INFO [Listener at localhost.localdomain/37751] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/java.io.tmpdir/Jetty_localhost_38981_datanode____4tvkr8/webapp
2023-02-14 21:01:46,420 INFO [Listener at localhost.localdomain/37751] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38981
2023-02-14 21:01:46,428 WARN [Listener at localhost.localdomain/38639] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-02-14 21:01:48,002 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x812fc67e108bddb7: Processing first storage report for DS-67143e06-ceaa-4030-ba21-80826ba16615 from datanode dbba8f00-6bd9-4753-bf9d-fb1cb1ebab05
2023-02-14 21:01:48,003 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x812fc67e108bddb7: from storage DS-67143e06-ceaa-4030-ba21-80826ba16615 node DatanodeRegistration(127.0.0.1:34563, datanodeUuid=dbba8f00-6bd9-4753-bf9d-fb1cb1ebab05, infoPort=37441, infoSecurePort=0, ipcPort=43835, storageInfo=lv=-57;cid=testClusterID;nsid=2064364898;c=1676408504254), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-02-14 21:01:48,003 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc14bb1e7f292701c: Processing first storage report for DS-398c6661-5003-48e3-afbb-94dd8fec9206 from datanode 5a2e490c-60b6-48d5-8f8d-bbb4a086af3e
2023-02-14 21:01:48,004 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc14bb1e7f292701c: from storage DS-398c6661-5003-48e3-afbb-94dd8fec9206 node DatanodeRegistration(127.0.0.1:33873, datanodeUuid=5a2e490c-60b6-48d5-8f8d-bbb4a086af3e, infoPort=36437, infoSecurePort=0, ipcPort=38639, storageInfo=lv=-57;cid=testClusterID;nsid=2064364898;c=1676408504254), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-02-14 21:01:48,004 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe41794c3f9489f36: Processing first storage report for DS-2f99e0bb-e971-4518-8494-e870ede5263d from datanode 7afee2d3-f09e-43e2-ac6a-b6be431c5702
2023-02-14 21:01:48,004 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe41794c3f9489f36: from storage DS-2f99e0bb-e971-4518-8494-e870ede5263d node DatanodeRegistration(127.0.0.1:34287, datanodeUuid=7afee2d3-f09e-43e2-ac6a-b6be431c5702, infoPort=33253, infoSecurePort=0, ipcPort=37751, storageInfo=lv=-57;cid=testClusterID;nsid=2064364898;c=1676408504254), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-02-14 21:01:48,004 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x812fc67e108bddb7: Processing first storage report for DS-53b4cb8f-16f5-41a0-b226-efcddb893fa8 from datanode dbba8f00-6bd9-4753-bf9d-fb1cb1ebab05
2023-02-14 21:01:48,004 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x812fc67e108bddb7: from storage DS-53b4cb8f-16f5-41a0-b226-efcddb893fa8 node DatanodeRegistration(127.0.0.1:34563, datanodeUuid=dbba8f00-6bd9-4753-bf9d-fb1cb1ebab05, infoPort=37441, infoSecurePort=0, ipcPort=43835, storageInfo=lv=-57;cid=testClusterID;nsid=2064364898;c=1676408504254), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-02-14 21:01:48,004 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc14bb1e7f292701c: Processing first storage report for DS-648609c7-cb07-46e3-b778-2cf4616376ba from datanode 5a2e490c-60b6-48d5-8f8d-bbb4a086af3e
2023-02-14 21:01:48,005 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc14bb1e7f292701c: from storage DS-648609c7-cb07-46e3-b778-2cf4616376ba node DatanodeRegistration(127.0.0.1:33873, datanodeUuid=5a2e490c-60b6-48d5-8f8d-bbb4a086af3e, infoPort=36437, infoSecurePort=0, ipcPort=38639, storageInfo=lv=-57;cid=testClusterID;nsid=2064364898;c=1676408504254), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-02-14 21:01:48,005 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe41794c3f9489f36: Processing first storage report for DS-65863313-6b8d-4891-bb9c-d0f7c2e1c206 from datanode 7afee2d3-f09e-43e2-ac6a-b6be431c5702
2023-02-14 21:01:48,005 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe41794c3f9489f36: from storage DS-65863313-6b8d-4891-bb9c-d0f7c2e1c206 node DatanodeRegistration(127.0.0.1:34287, datanodeUuid=7afee2d3-f09e-43e2-ac6a-b6be431c5702, infoPort=33253, infoSecurePort=0, ipcPort=37751, storageInfo=lv=-57;cid=testClusterID;nsid=2064364898;c=1676408504254), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-02-14 21:01:48,079 DEBUG [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6
2023-02-14 21:01:48,129 INFO [Listener at localhost.localdomain/38639] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/cluster_6dc3721e-d3ec-6e0d-bdff-2016947e8490/zookeeper_0, clientPort=51069, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/cluster_6dc3721e-d3ec-6e0d-bdff-2016947e8490/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/cluster_6dc3721e-d3ec-6e0d-bdff-2016947e8490/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-02-14 21:01:48,142 INFO [Listener at localhost.localdomain/38639] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51069
2023-02-14 21:01:48,149 INFO [Listener at localhost.localdomain/38639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-02-14 21:01:48,152 INFO [Listener at localhost.localdomain/38639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-02-14 21:01:48,792 INFO [Listener at localhost.localdomain/38639] util.FSUtils(479): Created version file at hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e with version=8
2023-02-14 21:01:48,792 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/hbase-staging
2023-02-14 21:01:49,075 INFO [Listener at localhost.localdomain/38639] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-02-14 21:01:49,447 INFO [Listener at localhost.localdomain/38639] client.ConnectionUtils(127): master/jenkins-hbase12:0 server-side Connection retries=6
2023-02-14 21:01:49,472 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-02-14 21:01:49,473 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-02-14 21:01:49,473 INFO [Listener at localhost.localdomain/38639] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-02-14 21:01:49,473 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-02-14 21:01:49,474 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-02-14 21:01:49,591 INFO [Listener at localhost.localdomain/38639] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-02-14 21:01:49,649 DEBUG [Listener at localhost.localdomain/38639] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-02-14 21:01:49,722 INFO [Listener at localhost.localdomain/38639] ipc.NettyRpcServer(120): Bind to /136.243.104.168:43051
2023-02-14 21:01:49,731 INFO [Listener at localhost.localdomain/38639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-02-14 21:01:49,732 INFO [Listener at localhost.localdomain/38639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-02-14 21:01:49,749 INFO [Listener at localhost.localdomain/38639] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43051 connecting to ZooKeeper ensemble=127.0.0.1:51069
2023-02-14 21:01:49,948 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:430510x0, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-02-14 21:01:49,952 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): master:43051-0x10163479a3f0000 connected
2023-02-14 21:01:50,079 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ZKUtil(164): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-02-14 21:01:50,082 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ZKUtil(164): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-02-14 21:01:50,088 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ZKUtil(164): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-02-14 21:01:50,098 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43051
2023-02-14 21:01:50,098 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43051
2023-02-14 21:01:50,098 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43051
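The RpcExecutor entries above show the reduced RPC sizing the mini-cluster runs with (handlerCount=3, maxQueueLength=30 per queue). As a rough illustration only, and on the assumption that the usual knob behind the handler count is hbase.regionserver.handler.count (the log itself does not name the key), a configuration like the following would produce those numbers; the queue length appears to be derived from the handler count times a per-handler factor:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcHandlerConfigSketch {
  public static Configuration lowHandlerCountConf() {
    Configuration conf = HBaseConfiguration.create();
    // handlerCount=3 in the RpcExecutor lines above; assumption: the test utility
    // lowers hbase.regionserver.handler.count from its much larger default.
    conf.setInt("hbase.regionserver.handler.count", 3);
    // maxQueueLength=30 looks like 3 handlers x a per-handler queue factor of 10.
    return conf;
  }
}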
2023-02-14 21:01:50,099 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43051
2023-02-14 21:01:50,099 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43051
2023-02-14 21:01:50,104 INFO [Listener at localhost.localdomain/38639] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e, hbase.cluster.distributed=false
2023-02-14 21:01:50,161 INFO [Listener at localhost.localdomain/38639] client.ConnectionUtils(127): regionserver/jenkins-hbase12:0 server-side Connection retries=6
2023-02-14 21:01:50,162 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-02-14 21:01:50,162 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-02-14 21:01:50,162 INFO [Listener at localhost.localdomain/38639] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-02-14 21:01:50,162 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-02-14 21:01:50,162 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-02-14 21:01:50,166 INFO [Listener at localhost.localdomain/38639] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-02-14 21:01:50,169 INFO [Listener at localhost.localdomain/38639] ipc.NettyRpcServer(120): Bind to /136.243.104.168:37197
2023-02-14 21:01:50,171 INFO [Listener at localhost.localdomain/38639] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-02-14 21:01:50,176 DEBUG [Listener at localhost.localdomain/38639] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-02-14 21:01:50,177 INFO [Listener at localhost.localdomain/38639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-02-14 21:01:50,179 INFO [Listener at localhost.localdomain/38639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-02-14 21:01:50,181 INFO [Listener at localhost.localdomain/38639] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37197 connecting to ZooKeeper ensemble=127.0.0.1:51069
2023-02-14 21:01:50,192 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:371970x0, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-02-14 21:01:50,193 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): regionserver:37197-0x10163479a3f0001 connected
2023-02-14 21:01:50,194 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ZKUtil(164): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-02-14 21:01:50,195 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ZKUtil(164): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-02-14 21:01:50,196 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ZKUtil(164): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-02-14 21:01:50,197 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37197
2023-02-14 21:01:50,197 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37197
2023-02-14 21:01:50,198 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37197
2023-02-14 21:01:50,198 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37197
2023-02-14 21:01:50,199 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37197
2023-02-14 21:01:50,213 INFO [Listener at localhost.localdomain/38639] client.ConnectionUtils(127): regionserver/jenkins-hbase12:0 server-side Connection retries=6
2023-02-14 21:01:50,214 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-02-14 21:01:50,214 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-02-14 21:01:50,214 INFO [Listener at localhost.localdomain/38639] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-02-14 21:01:50,215 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-02-14 21:01:50,215 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-02-14 21:01:50,215 INFO [Listener at localhost.localdomain/38639] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-02-14 21:01:50,216 INFO [Listener at localhost.localdomain/38639] ipc.NettyRpcServer(120): Bind to /136.243.104.168:42689
2023-02-14 21:01:50,217 INFO [Listener at localhost.localdomain/38639] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-02-14 21:01:50,218 DEBUG [Listener at localhost.localdomain/38639] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-02-14 21:01:50,218 INFO [Listener at localhost.localdomain/38639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-02-14 21:01:50,220 INFO [Listener at localhost.localdomain/38639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-02-14 21:01:50,222 INFO [Listener at localhost.localdomain/38639] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42689 connecting to ZooKeeper ensemble=127.0.0.1:51069
2023-02-14 21:01:50,234 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:426890x0, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-02-14 21:01:50,236 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): regionserver:42689-0x10163479a3f0002 connected
2023-02-14 21:01:50,236 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ZKUtil(164): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-02-14 21:01:50,237 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ZKUtil(164): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-02-14 21:01:50,238 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ZKUtil(164): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-02-14 21:01:50,238 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42689
2023-02-14 21:01:50,239 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42689
2023-02-14 21:01:50,239 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42689
2023-02-14 21:01:50,239 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42689
2023-02-14 21:01:50,240 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42689
2023-02-14 21:01:50,253 INFO [Listener at localhost.localdomain/38639] client.ConnectionUtils(127): regionserver/jenkins-hbase12:0 server-side Connection retries=6
2023-02-14 21:01:50,253 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-02-14 21:01:50,254 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-02-14 21:01:50,254 INFO [Listener at localhost.localdomain/38639] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-02-14 21:01:50,254 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-02-14 21:01:50,254 INFO [Listener at localhost.localdomain/38639] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-02-14 21:01:50,254 INFO [Listener at localhost.localdomain/38639] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-02-14 21:01:50,256 INFO [Listener at localhost.localdomain/38639] ipc.NettyRpcServer(120): Bind to /136.243.104.168:38623
2023-02-14 21:01:50,257 INFO [Listener at localhost.localdomain/38639] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-02-14 21:01:50,257 DEBUG [Listener at localhost.localdomain/38639] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-02-14 21:01:50,258 INFO [Listener at localhost.localdomain/38639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-02-14 21:01:50,260 INFO [Listener at localhost.localdomain/38639] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-02-14 21:01:50,262 INFO [Listener at localhost.localdomain/38639] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38623 connecting to ZooKeeper ensemble=127.0.0.1:51069
2023-02-14 21:01:50,276 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:386230x0, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-02-14 21:01:50,278 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): regionserver:38623-0x10163479a3f0003 connected
2023-02-14 21:01:50,278 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ZKUtil(164): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-02-14 21:01:50,279 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ZKUtil(164): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-02-14 21:01:50,280 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ZKUtil(164): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-02-14 21:01:50,281 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38623
2023-02-14 21:01:50,281 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38623
2023-02-14 21:01:50,281 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38623
2023-02-14 21:01:50,282 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38623
2023-02-14 21:01:50,283 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38623
2023-02-14 21:01:50,284 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase12.apache.org,43051,1676408508905
2023-02-14 21:01:50,305 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-02-14 21:01:50,307 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase12.apache.org,43051,1676408508905
2023-02-14 21:01:50,336 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-02-14 21:01:50,337 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-02-14 21:01:50,336 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-02-14 21:01:50,337 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-02-14 21:01:50,338 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-02-14 21:01:50,339 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-02-14 21:01:50,341 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase12.apache.org,43051,1676408508905 from backup master directory
2023-02-14 21:01:50,341 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-02-14 21:01:50,350 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase12.apache.org,43051,1676408508905
2023-02-14 21:01:50,350 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-02-14 21:01:50,351 WARN [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-02-14 21:01:50,351 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase12.apache.org,43051,1676408508905
2023-02-14 21:01:50,356 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-02-14 21:01:50,358 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-02-14 21:01:50,445 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] util.FSUtils(628): Created cluster ID file at hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/hbase.id with ID: b939fc86-6e85-4f9b-aef8-784976f44f4b
2023-02-14 21:01:50,482 INFO [master/jenkins-hbase12:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-02-14 21:01:50,508 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-02-14 21:01:50,555 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x073e06b0 to 127.0.0.1:51069 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-02-14 21:01:50,597 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2822ce1c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-02-14 21:01:50,617 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-02-14 21:01:50,619 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-02-14 21:01:50,633 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below
2023-02-14 21:01:50,633 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x
2023-02-14 21:01:50,635 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x
java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE
    at java.lang.Enum.valueOf(Enum.java:238)
    at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
    at java.lang.Thread.run(Thread.java:750)
2023-02-14 21:01:50,639 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
    at java.lang.Class.getDeclaredMethod(Class.java:2130)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
    at java.lang.Thread.run(Thread.java:750)
2023-02-14 21:01:50,640 INFO [master/jenkins-hbase12:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
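The DEBUG traces above are expected startup noise rather than failures: before instantiating the AsyncFSWALProvider, the async output helpers probe the Hadoop client classes by reflection (method signatures, enum flags) and fall back to an older-version code path when something is missing, which is how they adapt to Hadoop 2.x versus 3.x at runtime. A generic sketch of that probe-and-fall-back pattern (the class and method below are hypothetical stand-ins, not the HBase code itself):

import java.lang.reflect.Method;

public final class ReflectionProbeSketch {
  /**
   * Returns the named method if the dependency on the classpath provides it, or null
   * so the caller can take an older-version code path instead. Loosely mirrors how the
   * log above reports "No decryptEncryptedDataEncryptionKey method in DFSClient" at
   * DEBUG and keeps going.
   */
  static Method probe(Class<?> owner, String name, Class<?>... parameterTypes) {
    try {
      return owner.getDeclaredMethod(name, parameterTypes);
    } catch (NoSuchMethodException e) {
      // Expected on older/newer dependency versions; log and fall back.
      return null;
    }
  }
}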
2023-02-14 21:01:50,667 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7689): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/MasterData/data/master/store-tmp
2023-02-14 21:01:50,701 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(865): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-02-14 21:01:50,701 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1603): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-02-14 21:01:50,701 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1625): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-02-14 21:01:50,701 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1646): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-02-14 21:01:50,701 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1713): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-02-14 21:01:50,701 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1723): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-02-14 21:01:50,702 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1837): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
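The 'master:store' descriptor printed above spells out the attributes of its single 'proc' column family. Purely as an illustration of how those same attributes map onto the HBase 2.x descriptor builder API (this is not code from the test or from HMaster, just a sketch under that assumption):

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.KeepDeletedCells;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public final class ProcFamilySketch {
  static ColumnFamilyDescriptor procFamily() {
    return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW)            // BLOOMFILTER => 'ROW'
        .setInMemory(false)                           // IN_MEMORY => 'false'
        .setMaxVersions(1)                            // VERSIONS => '1'
        .setKeepDeletedCells(KeepDeletedCells.FALSE)  // KEEP_DELETED_CELLS => 'FALSE'
        .setDataBlockEncoding(DataBlockEncoding.NONE) // DATA_BLOCK_ENCODING => 'NONE'
        .setCompressionType(Compression.Algorithm.NONE)
        .setTimeToLive(HConstants.FOREVER)            // TTL => 'FOREVER'
        .setMinVersions(0)
        .setBlockCacheEnabled(true)
        .setBlocksize(65536)
        .setScope(0)                                  // REPLICATION_SCOPE => '0'
        .build();
  }
}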
2023-02-14 21:01:50,702 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1557): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-02-14 21:01:50,703 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/MasterData/WALs/jenkins-hbase12.apache.org,43051,1676408508905
2023-02-14 21:01:50,722 INFO [master/jenkins-hbase12:0:becomeActiveMaster] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C43051%2C1676408508905, suffix=, logDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/MasterData/WALs/jenkins-hbase12.apache.org,43051,1676408508905, archiveDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/MasterData/oldWALs, maxLogs=10
2023-02-14 21:01:50,770 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34287,DS-2f99e0bb-e971-4518-8494-e870ede5263d,DISK]
2023-02-14 21:01:50,770 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33873,DS-398c6661-5003-48e3-afbb-94dd8fec9206,DISK]
2023-02-14 21:01:50,770 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34563,DS-67143e06-ceaa-4030-ba21-80826ba16615,DISK]
2023-02-14 21:01:50,778 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf.
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite
    at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:750)
2023-02-14 21:01:50,842 INFO [master/jenkins-hbase12:0:becomeActiveMaster] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/MasterData/WALs/jenkins-hbase12.apache.org,43051,1676408508905/jenkins-hbase12.apache.org%2C43051%2C1676408508905.1676408510730
2023-02-14 21:01:50,842 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34287,DS-2f99e0bb-e971-4518-8494-e870ede5263d,DISK], DatanodeInfoWithStorage[127.0.0.1:34563,DS-67143e06-ceaa-4030-ba21-80826ba16615,DISK], DatanodeInfoWithStorage[127.0.0.1:33873,DS-398c6661-5003-48e3-afbb-94dd8fec9206,DISK]]
2023-02-14 21:01:50,843 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7850): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-02-14 21:01:50,843 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(865): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-02-14 21:01:50,847 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7890): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-02-14 21:01:50,848 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7893): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-02-14 21:01:50,901 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-02-14 21:01:50,908 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-02-14 21:01:50,927 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-02-14 21:01:50,937 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
encoding=NONE, compression=NONE 2023-02-14 21:01:50,943 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-02-14 21:01:50,944 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-02-14 21:01:50,958 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1054): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-02-14 21:01:50,962 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-02-14 21:01:50,962 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1071): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=73683350, jitterRate=0.0979674756526947}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-02-14 21:01:50,963 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(964): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-02-14 21:01:50,964 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-02-14 21:01:50,983 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-02-14 21:01:50,984 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-02-14 21:01:50,985 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-02-14 21:01:50,987 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-02-14 21:01:51,011 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 24 msec 2023-02-14 21:01:51,011 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(95): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-02-14 21:01:51,031 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-02-14 21:01:51,036 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-02-14 21:01:51,060 INFO [master/jenkins-hbase12:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-02-14 21:01:51,064 INFO [master/jenkins-hbase12:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-02-14 21:01:51,066 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-02-14 21:01:51,070 INFO [master/jenkins-hbase12:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-02-14 21:01:51,073 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-02-14 21:01:51,126 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:51,127 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-02-14 21:01:51,128 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-02-14 21:01:51,138 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-02-14 21:01:51,150 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-14 21:01:51,150 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-14 21:01:51,150 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-14 21:01:51,150 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-14 21:01:51,150 DEBUG [Listener at localhost.localdomain/38639-EventThread] 
zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:51,151 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase12.apache.org,43051,1676408508905, sessionid=0x10163479a3f0000, setting cluster-up flag (Was=false) 2023-02-14 21:01:51,181 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:51,213 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-02-14 21:01:51,215 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase12.apache.org,43051,1676408508905 2023-02-14 21:01:51,234 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:51,266 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-02-14 21:01:51,267 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase12.apache.org,43051,1676408508905 2023-02-14 21:01:51,271 WARN [master/jenkins-hbase12:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/.hbase-snapshot/.tmp 2023-02-14 21:01:51,287 INFO [RS:2;jenkins-hbase12:38623] regionserver.HRegionServer(952): ClusterId : b939fc86-6e85-4f9b-aef8-784976f44f4b 2023-02-14 21:01:51,287 INFO [RS:0;jenkins-hbase12:37197] regionserver.HRegionServer(952): ClusterId : b939fc86-6e85-4f9b-aef8-784976f44f4b 2023-02-14 21:01:51,287 INFO [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(952): ClusterId : b939fc86-6e85-4f9b-aef8-784976f44f4b 2023-02-14 21:01:51,291 DEBUG [RS:1;jenkins-hbase12:42689] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-02-14 21:01:51,291 DEBUG [RS:0;jenkins-hbase12:37197] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-02-14 21:01:51,291 DEBUG [RS:2;jenkins-hbase12:38623] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-02-14 21:01:51,317 DEBUG [RS:1;jenkins-hbase12:42689] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-02-14 21:01:51,317 DEBUG [RS:2;jenkins-hbase12:38623] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-02-14 21:01:51,317 DEBUG [RS:0;jenkins-hbase12:37197] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-02-14 21:01:51,317 DEBUG [RS:2;jenkins-hbase12:38623] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-02-14 21:01:51,317 DEBUG 
[RS:1;jenkins-hbase12:42689] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-02-14 21:01:51,317 DEBUG [RS:0;jenkins-hbase12:37197] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-02-14 21:01:51,340 DEBUG [RS:2;jenkins-hbase12:38623] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-02-14 21:01:51,340 DEBUG [RS:1;jenkins-hbase12:42689] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-02-14 21:01:51,340 DEBUG [RS:0;jenkins-hbase12:37197] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-02-14 21:01:51,342 DEBUG [RS:2;jenkins-hbase12:38623] zookeeper.ReadOnlyZKClient(139): Connect 0x081cf657 to 127.0.0.1:51069 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-14 21:01:51,343 DEBUG [RS:1;jenkins-hbase12:42689] zookeeper.ReadOnlyZKClient(139): Connect 0x77e29910 to 127.0.0.1:51069 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-14 21:01:51,343 DEBUG [RS:0;jenkins-hbase12:37197] zookeeper.ReadOnlyZKClient(139): Connect 0x2374c727 to 127.0.0.1:51069 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-14 21:01:51,400 DEBUG [RS:2;jenkins-hbase12:38623] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6db1924, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-14 21:01:51,401 DEBUG [RS:0;jenkins-hbase12:37197] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2db33d39, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-14 21:01:51,401 DEBUG [RS:2;jenkins-hbase12:38623] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e49d27, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-14 21:01:51,401 DEBUG [RS:1;jenkins-hbase12:42689] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7925699c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-14 21:01:51,401 DEBUG [RS:0;jenkins-hbase12:37197] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@298cac57, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-14 21:01:51,402 DEBUG [RS:1;jenkins-hbase12:42689] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d1af260, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-14 21:01:51,411 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): 
Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-02-14 21:01:51,419 DEBUG [RS:2;jenkins-hbase12:38623] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase12:38623 2023-02-14 21:01:51,421 DEBUG [RS:0;jenkins-hbase12:37197] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase12:37197 2023-02-14 21:01:51,423 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-14 21:01:51,423 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-14 21:01:51,423 DEBUG [RS:1;jenkins-hbase12:42689] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase12:42689 2023-02-14 21:01:51,423 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-14 21:01:51,424 INFO [RS:0;jenkins-hbase12:37197] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-02-14 21:01:51,424 INFO [RS:1;jenkins-hbase12:42689] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-02-14 21:01:51,424 INFO [RS:1;jenkins-hbase12:42689] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-02-14 21:01:51,424 INFO [RS:0;jenkins-hbase12:37197] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-02-14 21:01:51,424 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-14 21:01:51,424 INFO [RS:2;jenkins-hbase12:38623] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-02-14 21:01:51,424 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase12:0, corePoolSize=10, maxPoolSize=10 2023-02-14 21:01:51,424 DEBUG [RS:0;jenkins-hbase12:37197] regionserver.HRegionServer(1023): About to register with Master. 2023-02-14 21:01:51,424 DEBUG [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(1023): About to register with Master. 2023-02-14 21:01:51,425 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,424 INFO [RS:2;jenkins-hbase12:38623] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-02-14 21:01:51,425 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-14 21:01:51,425 DEBUG [RS:2;jenkins-hbase12:38623] regionserver.HRegionServer(1023): About to register with Master. 
2023-02-14 21:01:51,425 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,427 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1676408541427 2023-02-14 21:01:51,427 INFO [RS:2;jenkins-hbase12:38623] regionserver.HRegionServer(2810): reportForDuty to master=jenkins-hbase12.apache.org,43051,1676408508905 with isa=jenkins-hbase12.apache.org/136.243.104.168:38623, startcode=1676408510252 2023-02-14 21:01:51,427 INFO [RS:0;jenkins-hbase12:37197] regionserver.HRegionServer(2810): reportForDuty to master=jenkins-hbase12.apache.org,43051,1676408508905 with isa=jenkins-hbase12.apache.org/136.243.104.168:37197, startcode=1676408510161 2023-02-14 21:01:51,427 INFO [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(2810): reportForDuty to master=jenkins-hbase12.apache.org,43051,1676408508905 with isa=jenkins-hbase12.apache.org/136.243.104.168:42689, startcode=1676408510213 2023-02-14 21:01:51,429 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-02-14 21:01:51,433 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-02-14 21:01:51,434 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-02-14 21:01:51,439 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-02-14 21:01:51,442 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-02-14 21:01:51,448 DEBUG [RS:2;jenkins-hbase12:38623] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-02-14 21:01:51,448 DEBUG [RS:0;jenkins-hbase12:37197] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-02-14 21:01:51,448 DEBUG [RS:1;jenkins-hbase12:42689] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-02-14 21:01:51,450 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-02-14 21:01:51,451 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-02-14 21:01:51,451 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-02-14 21:01:51,452 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-02-14 21:01:51,453 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,455 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-02-14 21:01:51,456 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-02-14 21:01:51,456 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-02-14 21:01:51,458 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-02-14 21:01:51,459 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-02-14 21:01:51,460 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.large.0-1676408511460,5,FailOnTimeoutGroup] 2023-02-14 21:01:51,461 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.small.0-1676408511460,5,FailOnTimeoutGroup] 2023-02-14 21:01:51,461 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,461 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-02-14 21:01:51,462 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,462 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-02-14 21:01:51,490 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-02-14 21:01:51,494 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-02-14 21:01:51,494 INFO [PEWorker-1] regionserver.HRegion(7671): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e 2023-02-14 21:01:51,520 DEBUG [PEWorker-1] regionserver.HRegion(865): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-14 21:01:51,522 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:40695, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-02-14 21:01:51,522 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:55357, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-02-14 21:01:51,522 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:45993, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-02-14 21:01:51,525 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-02-14 21:01:51,529 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/info 2023-02-14 21:01:51,529 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-02-14 21:01:51,530 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:51,530 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-02-14 21:01:51,534 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/rep_barrier 2023-02-14 21:01:51,534 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-02-14 21:01:51,535 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:51,535 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-02-14 21:01:51,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43051] master.ServerManager(394): Registering regionserver=jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:51,537 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43051] master.ServerManager(394): Registering regionserver=jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:51,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43051] master.ServerManager(394): Registering regionserver=jenkins-hbase12.apache.org,37197,1676408510161 2023-02-14 21:01:51,539 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/table 2023-02-14 21:01:51,540 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-02-14 21:01:51,541 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:51,543 DEBUG [PEWorker-1] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740 2023-02-14 21:01:51,544 DEBUG [PEWorker-1] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740 2023-02-14 21:01:51,548 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-02-14 21:01:51,551 DEBUG [PEWorker-1] regionserver.HRegion(1054): writing seq id for 1588230740 2023-02-14 21:01:51,555 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-02-14 21:01:51,556 INFO [PEWorker-1] regionserver.HRegion(1071): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=65192171, jitterRate=-0.028560951352119446}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-02-14 21:01:51,556 DEBUG [PEWorker-1] regionserver.HRegion(964): Region open journal for 1588230740: 2023-02-14 21:01:51,556 DEBUG [PEWorker-1] regionserver.HRegion(1603): Closing 1588230740, disabling compactions & flushes 2023-02-14 21:01:51,556 INFO [PEWorker-1] regionserver.HRegion(1625): Closing region hbase:meta,,1.1588230740 2023-02-14 21:01:51,557 DEBUG [PEWorker-1] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-02-14 21:01:51,557 DEBUG [PEWorker-1] regionserver.HRegion(1713): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-02-14 21:01:51,557 DEBUG [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(1596): Config from master: hbase.rootdir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e 2023-02-14 21:01:51,557 DEBUG [PEWorker-1] regionserver.HRegion(1723): Updates disabled for region hbase:meta,,1.1588230740 2023-02-14 21:01:51,557 DEBUG [RS:0;jenkins-hbase12:37197] regionserver.HRegionServer(1596): Config from master: hbase.rootdir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e 2023-02-14 21:01:51,557 DEBUG [RS:2;jenkins-hbase12:38623] regionserver.HRegionServer(1596): Config from master: 
hbase.rootdir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e 2023-02-14 21:01:51,557 DEBUG [RS:0;jenkins-hbase12:37197] regionserver.HRegionServer(1596): Config from master: fs.defaultFS=hdfs://localhost.localdomain:40959 2023-02-14 21:01:51,557 DEBUG [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(1596): Config from master: fs.defaultFS=hdfs://localhost.localdomain:40959 2023-02-14 21:01:51,558 DEBUG [RS:0;jenkins-hbase12:37197] regionserver.HRegionServer(1596): Config from master: hbase.master.info.port=-1 2023-02-14 21:01:51,557 DEBUG [RS:2;jenkins-hbase12:38623] regionserver.HRegionServer(1596): Config from master: fs.defaultFS=hdfs://localhost.localdomain:40959 2023-02-14 21:01:51,558 DEBUG [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(1596): Config from master: hbase.master.info.port=-1 2023-02-14 21:01:51,558 DEBUG [RS:2;jenkins-hbase12:38623] regionserver.HRegionServer(1596): Config from master: hbase.master.info.port=-1 2023-02-14 21:01:51,558 INFO [PEWorker-1] regionserver.HRegion(1837): Closed hbase:meta,,1.1588230740 2023-02-14 21:01:51,559 DEBUG [PEWorker-1] regionserver.HRegion(1557): Region close journal for 1588230740: 2023-02-14 21:01:51,564 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-02-14 21:01:51,564 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-02-14 21:01:51,572 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-02-14 21:01:51,583 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-02-14 21:01:51,586 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-02-14 21:01:51,599 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:01:51,600 DEBUG [RS:2;jenkins-hbase12:38623] zookeeper.ZKUtil(162): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:51,600 DEBUG [RS:1;jenkins-hbase12:42689] zookeeper.ZKUtil(162): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:51,600 DEBUG [RS:0;jenkins-hbase12:37197] zookeeper.ZKUtil(162): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,37197,1676408510161 2023-02-14 21:01:51,600 WARN [RS:1;jenkins-hbase12:42689] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-02-14 21:01:51,601 WARN [RS:0;jenkins-hbase12:37197] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-02-14 21:01:51,601 INFO [RS:1;jenkins-hbase12:42689] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-14 21:01:51,600 WARN [RS:2;jenkins-hbase12:38623] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-02-14 21:01:51,601 INFO [RS:0;jenkins-hbase12:37197] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-14 21:01:51,602 INFO [RS:2;jenkins-hbase12:38623] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-14 21:01:51,602 DEBUG [RS:0;jenkins-hbase12:37197] regionserver.HRegionServer(1947): logDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/WALs/jenkins-hbase12.apache.org,37197,1676408510161 2023-02-14 21:01:51,602 DEBUG [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(1947): logDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/WALs/jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:51,602 DEBUG [RS:2;jenkins-hbase12:38623] regionserver.HRegionServer(1947): logDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/WALs/jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:51,603 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase12.apache.org,42689,1676408510213] 2023-02-14 21:01:51,603 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase12.apache.org,37197,1676408510161] 2023-02-14 21:01:51,603 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase12.apache.org,38623,1676408510252] 2023-02-14 21:01:51,617 DEBUG [RS:2;jenkins-hbase12:38623] zookeeper.ZKUtil(162): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:51,617 DEBUG [RS:0;jenkins-hbase12:37197] zookeeper.ZKUtil(162): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:51,617 DEBUG [RS:1;jenkins-hbase12:42689] zookeeper.ZKUtil(162): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:51,618 DEBUG [RS:2;jenkins-hbase12:38623] zookeeper.ZKUtil(162): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,37197,1676408510161 2023-02-14 21:01:51,618 DEBUG [RS:0;jenkins-hbase12:37197] zookeeper.ZKUtil(162): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,37197,1676408510161 2023-02-14 21:01:51,618 DEBUG [RS:1;jenkins-hbase12:42689] zookeeper.ZKUtil(162): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, 
baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,37197,1676408510161 2023-02-14 21:01:51,618 DEBUG [RS:2;jenkins-hbase12:38623] zookeeper.ZKUtil(162): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:51,618 DEBUG [RS:0;jenkins-hbase12:37197] zookeeper.ZKUtil(162): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:51,618 DEBUG [RS:1;jenkins-hbase12:42689] zookeeper.ZKUtil(162): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:51,627 DEBUG [RS:0;jenkins-hbase12:37197] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-02-14 21:01:51,627 DEBUG [RS:2;jenkins-hbase12:38623] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-02-14 21:01:51,627 DEBUG [RS:1;jenkins-hbase12:42689] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-02-14 21:01:51,635 INFO [RS:0;jenkins-hbase12:37197] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-02-14 21:01:51,635 INFO [RS:1;jenkins-hbase12:42689] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-02-14 21:01:51,635 INFO [RS:2;jenkins-hbase12:38623] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-02-14 21:01:51,654 INFO [RS:2;jenkins-hbase12:38623] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-02-14 21:01:51,654 INFO [RS:1;jenkins-hbase12:42689] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-02-14 21:01:51,654 INFO [RS:0;jenkins-hbase12:37197] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-02-14 21:01:51,657 INFO [RS:2;jenkins-hbase12:38623] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-02-14 21:01:51,657 INFO [RS:1;jenkins-hbase12:42689] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-02-14 21:01:51,658 INFO [RS:2;jenkins-hbase12:38623] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,657 INFO [RS:0;jenkins-hbase12:37197] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-02-14 21:01:51,658 INFO [RS:1;jenkins-hbase12:42689] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-02-14 21:01:51,658 INFO [RS:0;jenkins-hbase12:37197] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,659 INFO [RS:2;jenkins-hbase12:38623] regionserver.HRegionServer$CompactionChecker(1838): CompactionChecker runs every PT1S 2023-02-14 21:01:51,659 INFO [RS:0;jenkins-hbase12:37197] regionserver.HRegionServer$CompactionChecker(1838): CompactionChecker runs every PT1S 2023-02-14 21:01:51,659 INFO [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer$CompactionChecker(1838): CompactionChecker runs every PT1S 2023-02-14 21:01:51,667 INFO [RS:0;jenkins-hbase12:37197] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,667 INFO [RS:1;jenkins-hbase12:42689] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,667 INFO [RS:2;jenkins-hbase12:38623] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,668 DEBUG [RS:0;jenkins-hbase12:37197] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,668 DEBUG [RS:1;jenkins-hbase12:42689] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,668 DEBUG [RS:2;jenkins-hbase12:38623] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,668 DEBUG [RS:0;jenkins-hbase12:37197] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,668 DEBUG [RS:2;jenkins-hbase12:38623] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,668 DEBUG [RS:1;jenkins-hbase12:42689] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,669 DEBUG [RS:0;jenkins-hbase12:37197] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,669 DEBUG [RS:1;jenkins-hbase12:42689] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,669 DEBUG [RS:2;jenkins-hbase12:38623] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,669 DEBUG [RS:1;jenkins-hbase12:42689] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,669 DEBUG [RS:0;jenkins-hbase12:37197] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,669 DEBUG [RS:1;jenkins-hbase12:42689] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 
21:01:51,669 DEBUG [RS:2;jenkins-hbase12:38623] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,669 DEBUG [RS:0;jenkins-hbase12:37197] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,669 DEBUG [RS:2;jenkins-hbase12:38623] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,669 DEBUG [RS:1;jenkins-hbase12:42689] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-14 21:01:51,669 DEBUG [RS:2;jenkins-hbase12:38623] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-14 21:01:51,669 DEBUG [RS:0;jenkins-hbase12:37197] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-14 21:01:51,669 DEBUG [RS:2;jenkins-hbase12:38623] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,670 DEBUG [RS:0;jenkins-hbase12:37197] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,669 DEBUG [RS:1;jenkins-hbase12:42689] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,670 DEBUG [RS:0;jenkins-hbase12:37197] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,670 DEBUG [RS:2;jenkins-hbase12:38623] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,670 DEBUG [RS:1;jenkins-hbase12:42689] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,670 DEBUG [RS:2;jenkins-hbase12:38623] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,670 DEBUG [RS:0;jenkins-hbase12:37197] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,670 DEBUG [RS:2;jenkins-hbase12:38623] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,670 DEBUG [RS:1;jenkins-hbase12:42689] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,670 DEBUG [RS:0;jenkins-hbase12:37197] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,670 DEBUG [RS:1;jenkins-hbase12:42689] executor.ExecutorService(93): Starting 
executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:51,673 INFO [RS:2;jenkins-hbase12:38623] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,673 INFO [RS:2;jenkins-hbase12:38623] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,673 INFO [RS:2;jenkins-hbase12:38623] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,674 INFO [RS:1;jenkins-hbase12:42689] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,674 INFO [RS:1;jenkins-hbase12:42689] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,674 INFO [RS:0;jenkins-hbase12:37197] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,674 INFO [RS:1;jenkins-hbase12:42689] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,674 INFO [RS:0;jenkins-hbase12:37197] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,674 INFO [RS:0;jenkins-hbase12:37197] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,686 INFO [RS:2;jenkins-hbase12:38623] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-02-14 21:01:51,689 INFO [RS:2;jenkins-hbase12:38623] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,38623,1676408510252-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,689 INFO [RS:1;jenkins-hbase12:42689] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-02-14 21:01:51,689 INFO [RS:0;jenkins-hbase12:37197] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-02-14 21:01:51,690 INFO [RS:1;jenkins-hbase12:42689] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,42689,1676408510213-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:51,690 INFO [RS:0;jenkins-hbase12:37197] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,37197,1676408510161-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-02-14 21:01:51,704 INFO [RS:0;jenkins-hbase12:37197] regionserver.Replication(203): jenkins-hbase12.apache.org,37197,1676408510161 started 2023-02-14 21:01:51,704 INFO [RS:0;jenkins-hbase12:37197] regionserver.HRegionServer(1638): Serving as jenkins-hbase12.apache.org,37197,1676408510161, RpcServer on jenkins-hbase12.apache.org/136.243.104.168:37197, sessionid=0x10163479a3f0001 2023-02-14 21:01:51,704 DEBUG [RS:0;jenkins-hbase12:37197] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-02-14 21:01:51,704 DEBUG [RS:0;jenkins-hbase12:37197] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase12.apache.org,37197,1676408510161 2023-02-14 21:01:51,705 DEBUG [RS:0;jenkins-hbase12:37197] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,37197,1676408510161' 2023-02-14 21:01:51,705 DEBUG [RS:0;jenkins-hbase12:37197] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-02-14 21:01:51,705 DEBUG [RS:0;jenkins-hbase12:37197] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-02-14 21:01:51,706 DEBUG [RS:0;jenkins-hbase12:37197] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-02-14 21:01:51,706 DEBUG [RS:0;jenkins-hbase12:37197] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-02-14 21:01:51,706 DEBUG [RS:0;jenkins-hbase12:37197] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase12.apache.org,37197,1676408510161 2023-02-14 21:01:51,706 DEBUG [RS:0;jenkins-hbase12:37197] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,37197,1676408510161' 2023-02-14 21:01:51,706 DEBUG [RS:0;jenkins-hbase12:37197] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-02-14 21:01:51,706 DEBUG [RS:0;jenkins-hbase12:37197] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-02-14 21:01:51,707 DEBUG [RS:0;jenkins-hbase12:37197] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-02-14 21:01:51,707 INFO [RS:0;jenkins-hbase12:37197] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-02-14 21:01:51,707 INFO [RS:0;jenkins-hbase12:37197] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-02-14 21:01:51,710 INFO [RS:2;jenkins-hbase12:38623] regionserver.Replication(203): jenkins-hbase12.apache.org,38623,1676408510252 started 2023-02-14 21:01:51,711 INFO [RS:2;jenkins-hbase12:38623] regionserver.HRegionServer(1638): Serving as jenkins-hbase12.apache.org,38623,1676408510252, RpcServer on jenkins-hbase12.apache.org/136.243.104.168:38623, sessionid=0x10163479a3f0003 2023-02-14 21:01:51,713 INFO [RS:1;jenkins-hbase12:42689] regionserver.Replication(203): jenkins-hbase12.apache.org,42689,1676408510213 started 2023-02-14 21:01:51,713 DEBUG [RS:2;jenkins-hbase12:38623] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-02-14 21:01:51,713 INFO [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(1638): Serving as jenkins-hbase12.apache.org,42689,1676408510213, RpcServer on jenkins-hbase12.apache.org/136.243.104.168:42689, sessionid=0x10163479a3f0002 2023-02-14 21:01:51,713 DEBUG [RS:2;jenkins-hbase12:38623] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:51,713 DEBUG [RS:1;jenkins-hbase12:42689] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-02-14 21:01:51,714 DEBUG [RS:1;jenkins-hbase12:42689] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:51,714 DEBUG [RS:1;jenkins-hbase12:42689] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,42689,1676408510213' 2023-02-14 21:01:51,714 DEBUG [RS:1;jenkins-hbase12:42689] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-02-14 21:01:51,714 DEBUG [RS:2;jenkins-hbase12:38623] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,38623,1676408510252' 2023-02-14 21:01:51,714 DEBUG [RS:2;jenkins-hbase12:38623] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-02-14 21:01:51,715 DEBUG [RS:1;jenkins-hbase12:42689] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-02-14 21:01:51,715 DEBUG [RS:2;jenkins-hbase12:38623] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-02-14 21:01:51,715 DEBUG [RS:1;jenkins-hbase12:42689] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-02-14 21:01:51,715 DEBUG [RS:1;jenkins-hbase12:42689] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-02-14 21:01:51,715 DEBUG [RS:2;jenkins-hbase12:38623] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-02-14 21:01:51,715 DEBUG [RS:1;jenkins-hbase12:42689] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:51,715 DEBUG [RS:2;jenkins-hbase12:38623] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-02-14 21:01:51,715 DEBUG [RS:1;jenkins-hbase12:42689] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,42689,1676408510213' 2023-02-14 21:01:51,716 DEBUG [RS:1;jenkins-hbase12:42689] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: 
'/hbase/online-snapshot/abort' 2023-02-14 21:01:51,716 DEBUG [RS:2;jenkins-hbase12:38623] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:51,716 DEBUG [RS:2;jenkins-hbase12:38623] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,38623,1676408510252' 2023-02-14 21:01:51,716 DEBUG [RS:2;jenkins-hbase12:38623] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-02-14 21:01:51,716 DEBUG [RS:1;jenkins-hbase12:42689] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-02-14 21:01:51,716 DEBUG [RS:2;jenkins-hbase12:38623] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-02-14 21:01:51,716 DEBUG [RS:1;jenkins-hbase12:42689] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-02-14 21:01:51,717 INFO [RS:1;jenkins-hbase12:42689] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-02-14 21:01:51,717 DEBUG [RS:2;jenkins-hbase12:38623] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-02-14 21:01:51,717 INFO [RS:1;jenkins-hbase12:42689] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-02-14 21:01:51,717 INFO [RS:2;jenkins-hbase12:38623] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-02-14 21:01:51,717 INFO [RS:2;jenkins-hbase12:38623] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-02-14 21:01:51,738 DEBUG [jenkins-hbase12:43051] assignment.AssignmentManager(2178): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-02-14 21:01:51,742 DEBUG [jenkins-hbase12:43051] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase12.apache.org=0} racks are {/default-rack=0} 2023-02-14 21:01:51,747 DEBUG [jenkins-hbase12:43051] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-02-14 21:01:51,747 DEBUG [jenkins-hbase12:43051] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-02-14 21:01:51,747 DEBUG [jenkins-hbase12:43051] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-02-14 21:01:51,747 DEBUG [jenkins-hbase12:43051] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-02-14 21:01:51,749 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase12.apache.org,42689,1676408510213, state=OPENING 2023-02-14 21:01:51,771 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-02-14 21:01:51,781 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:51,783 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-02-14 21:01:51,787 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase12.apache.org,42689,1676408510213}] 2023-02-14 21:01:51,818 INFO 
[RS:0;jenkins-hbase12:37197] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C37197%2C1676408510161, suffix=, logDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/WALs/jenkins-hbase12.apache.org,37197,1676408510161, archiveDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/oldWALs, maxLogs=32 2023-02-14 21:01:51,821 INFO [RS:2;jenkins-hbase12:38623] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C38623%2C1676408510252, suffix=, logDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/WALs/jenkins-hbase12.apache.org,38623,1676408510252, archiveDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/oldWALs, maxLogs=32 2023-02-14 21:01:51,822 INFO [RS:1;jenkins-hbase12:42689] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C42689%2C1676408510213, suffix=, logDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/WALs/jenkins-hbase12.apache.org,42689,1676408510213, archiveDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/oldWALs, maxLogs=32 2023-02-14 21:01:51,842 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34287,DS-2f99e0bb-e971-4518-8494-e870ede5263d,DISK] 2023-02-14 21:01:51,845 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34563,DS-67143e06-ceaa-4030-ba21-80826ba16615,DISK] 2023-02-14 21:01:51,845 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33873,DS-398c6661-5003-48e3-afbb-94dd8fec9206,DISK] 2023-02-14 21:01:51,845 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34287,DS-2f99e0bb-e971-4518-8494-e870ede5263d,DISK] 2023-02-14 21:01:51,847 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33873,DS-398c6661-5003-48e3-afbb-94dd8fec9206,DISK] 2023-02-14 21:01:51,847 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34287,DS-2f99e0bb-e971-4518-8494-e870ede5263d,DISK] 2023-02-14 21:01:51,847 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34563,DS-67143e06-ceaa-4030-ba21-80826ba16615,DISK] 2023-02-14 21:01:51,847 DEBUG [RS-EventLoopGroup-5-1] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34563,DS-67143e06-ceaa-4030-ba21-80826ba16615,DISK] 2023-02-14 21:01:51,847 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33873,DS-398c6661-5003-48e3-afbb-94dd8fec9206,DISK] 2023-02-14 21:01:51,862 INFO [RS:0;jenkins-hbase12:37197] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/WALs/jenkins-hbase12.apache.org,37197,1676408510161/jenkins-hbase12.apache.org%2C37197%2C1676408510161.1676408511824 2023-02-14 21:01:51,862 INFO [RS:2;jenkins-hbase12:38623] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/WALs/jenkins-hbase12.apache.org,38623,1676408510252/jenkins-hbase12.apache.org%2C38623%2C1676408510252.1676408511825 2023-02-14 21:01:51,862 INFO [RS:1;jenkins-hbase12:42689] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/WALs/jenkins-hbase12.apache.org,42689,1676408510213/jenkins-hbase12.apache.org%2C42689%2C1676408510213.1676408511825 2023-02-14 21:01:51,862 DEBUG [RS:0;jenkins-hbase12:37197] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34287,DS-2f99e0bb-e971-4518-8494-e870ede5263d,DISK], DatanodeInfoWithStorage[127.0.0.1:34563,DS-67143e06-ceaa-4030-ba21-80826ba16615,DISK], DatanodeInfoWithStorage[127.0.0.1:33873,DS-398c6661-5003-48e3-afbb-94dd8fec9206,DISK]] 2023-02-14 21:01:51,863 DEBUG [RS:2;jenkins-hbase12:38623] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34563,DS-67143e06-ceaa-4030-ba21-80826ba16615,DISK], DatanodeInfoWithStorage[127.0.0.1:33873,DS-398c6661-5003-48e3-afbb-94dd8fec9206,DISK], DatanodeInfoWithStorage[127.0.0.1:34287,DS-2f99e0bb-e971-4518-8494-e870ede5263d,DISK]] 2023-02-14 21:01:51,863 DEBUG [RS:1;jenkins-hbase12:42689] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33873,DS-398c6661-5003-48e3-afbb-94dd8fec9206,DISK], DatanodeInfoWithStorage[127.0.0.1:34287,DS-2f99e0bb-e971-4518-8494-e870ede5263d,DISK], DatanodeInfoWithStorage[127.0.0.1:34563,DS-67143e06-ceaa-4030-ba21-80826ba16615,DISK]] 2023-02-14 21:01:51,978 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:51,980 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-02-14 21:01:51,984 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:37804, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-02-14 21:01:51,998 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(128): Open hbase:meta,,1.1588230740 2023-02-14 21:01:51,998 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-14 21:01:52,002 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase12.apache.org%2C42689%2C1676408510213.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/WALs/jenkins-hbase12.apache.org,42689,1676408510213, archiveDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/oldWALs, maxLogs=32 2023-02-14 21:01:52,019 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34563,DS-67143e06-ceaa-4030-ba21-80826ba16615,DISK] 2023-02-14 21:01:52,021 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34287,DS-2f99e0bb-e971-4518-8494-e870ede5263d,DISK] 2023-02-14 21:01:52,021 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33873,DS-398c6661-5003-48e3-afbb-94dd8fec9206,DISK] 2023-02-14 21:01:52,028 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/WALs/jenkins-hbase12.apache.org,42689,1676408510213/jenkins-hbase12.apache.org%2C42689%2C1676408510213.meta.1676408512003.meta 2023-02-14 21:01:52,028 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34563,DS-67143e06-ceaa-4030-ba21-80826ba16615,DISK], DatanodeInfoWithStorage[127.0.0.1:34287,DS-2f99e0bb-e971-4518-8494-e870ede5263d,DISK], DatanodeInfoWithStorage[127.0.0.1:33873,DS-398c6661-5003-48e3-afbb-94dd8fec9206,DISK]] 2023-02-14 21:01:52,028 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7850): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-02-14 21:01:52,030 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-02-14 21:01:52,046 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(8546): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-02-14 21:01:52,050 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-02-14 21:01:52,053 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-02-14 21:01:52,054 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(865): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-14 21:01:52,054 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7890): checking encryption for 1588230740 2023-02-14 21:01:52,054 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7893): checking classloading for 1588230740 2023-02-14 21:01:52,057 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-02-14 21:01:52,058 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/info 2023-02-14 21:01:52,058 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/info 2023-02-14 21:01:52,059 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-02-14 21:01:52,060 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:52,060 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-02-14 21:01:52,062 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/rep_barrier 2023-02-14 21:01:52,062 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/rep_barrier 2023-02-14 21:01:52,062 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-02-14 21:01:52,063 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:52,063 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-02-14 21:01:52,064 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/table 2023-02-14 21:01:52,065 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/table 2023-02-14 21:01:52,065 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-02-14 21:01:52,066 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:52,068 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740 2023-02-14 21:01:52,072 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740 2023-02-14 21:01:52,075 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-02-14 21:01:52,078 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1054): writing seq id for 1588230740 2023-02-14 21:01:52,079 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1071): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=72833814, jitterRate=0.08530840277671814}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-02-14 21:01:52,079 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(964): Region open journal for 1588230740: 2023-02-14 21:01:52,090 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2335): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1676408511971 2023-02-14 21:01:52,107 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2362): Finished post open deploy task for hbase:meta,,1.1588230740 2023-02-14 21:01:52,108 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(156): Opened hbase:meta,,1.1588230740 2023-02-14 21:01:52,109 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase12.apache.org,42689,1676408510213, state=OPEN 2023-02-14 21:01:52,182 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-02-14 21:01:52,182 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-02-14 21:01:52,190 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-02-14 21:01:52,193 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase12.apache.org,42689,1676408510213 in 395 msec 2023-02-14 21:01:52,199 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-02-14 21:01:52,199 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 622 msec 2023-02-14 21:01:52,205 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 889 msec 2023-02-14 21:01:52,205 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1676408512205, completionTime=-1 2023-02-14 21:01:52,206 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-02-14 21:01:52,206 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(1519): Joining cluster... 
2023-02-14 21:01:52,259 DEBUG [hconnection-0x57277a4d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-02-14 21:01:52,262 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:37810, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-02-14 21:01:52,276 INFO [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(1531): Number of RegionServers=3 2023-02-14 21:01:52,276 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1676408572276 2023-02-14 21:01:52,276 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1676408632276 2023-02-14 21:01:52,276 INFO [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(1538): Joined the cluster in 70 msec 2023-02-14 21:01:52,329 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,43051,1676408508905-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:52,329 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,43051,1676408508905-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:52,329 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,43051,1676408508905-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:52,332 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase12:43051, period=300000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:52,332 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:52,340 DEBUG [master/jenkins-hbase12:0.Chore.1] janitor.CatalogJanitor(175): 2023-02-14 21:01:52,347 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-02-14 21:01:52,348 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-02-14 21:01:52,356 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-02-14 21:01:52,358 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-02-14 21:01:52,361 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-02-14 21:01:52,382 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/.tmp/data/hbase/namespace/a74fd22b7a0f88c803b87cf527a37448 2023-02-14 21:01:52,385 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/.tmp/data/hbase/namespace/a74fd22b7a0f88c803b87cf527a37448 empty. 2023-02-14 21:01:52,386 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/.tmp/data/hbase/namespace/a74fd22b7a0f88c803b87cf527a37448 2023-02-14 21:01:52,386 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-02-14 21:01:52,427 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-02-14 21:01:52,429 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7671): creating {ENCODED => a74fd22b7a0f88c803b87cf527a37448, NAME => 'hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/.tmp 2023-02-14 21:01:52,449 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(865): Instantiated hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-14 21:01:52,449 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1603): Closing a74fd22b7a0f88c803b87cf527a37448, disabling compactions & flushes 2023-02-14 21:01:52,449 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1625): Closing region hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. 
2023-02-14 21:01:52,449 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. 2023-02-14 21:01:52,449 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1713): Acquired close lock on hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. after waiting 0 ms 2023-02-14 21:01:52,449 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1723): Updates disabled for region hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. 2023-02-14 21:01:52,449 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1837): Closed hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. 2023-02-14 21:01:52,449 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1557): Region close journal for a74fd22b7a0f88c803b87cf527a37448: 2023-02-14 21:01:52,454 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-02-14 21:01:52,470 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1676408512458"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1676408512458"}]},"ts":"1676408512458"} 2023-02-14 21:01:52,495 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-02-14 21:01:52,497 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-02-14 21:01:52,502 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1676408512498"}]},"ts":"1676408512498"} 2023-02-14 21:01:52,507 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-02-14 21:01:52,529 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase12.apache.org=0} racks are {/default-rack=0} 2023-02-14 21:01:52,531 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-02-14 21:01:52,531 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-02-14 21:01:52,531 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-02-14 21:01:52,531 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-02-14 21:01:52,535 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a74fd22b7a0f88c803b87cf527a37448, ASSIGN}] 2023-02-14 21:01:52,539 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a74fd22b7a0f88c803b87cf527a37448, ASSIGN 2023-02-14 21:01:52,541 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=a74fd22b7a0f88c803b87cf527a37448, ASSIGN; state=OFFLINE, location=jenkins-hbase12.apache.org,42689,1676408510213; forceNewPlan=false, retain=false 2023-02-14 21:01:52,694 INFO [jenkins-hbase12:43051] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-02-14 21:01:52,696 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=a74fd22b7a0f88c803b87cf527a37448, regionState=OPENING, regionLocation=jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:52,696 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1676408512695"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1676408512695"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1676408512695"}]},"ts":"1676408512695"} 2023-02-14 21:01:52,703 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure a74fd22b7a0f88c803b87cf527a37448, server=jenkins-hbase12.apache.org,42689,1676408510213}] 2023-02-14 21:01:52,870 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(128): Open hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. 2023-02-14 21:01:52,871 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7850): Opening region: {ENCODED => a74fd22b7a0f88c803b87cf527a37448, NAME => 'hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448.', STARTKEY => '', ENDKEY => ''} 2023-02-14 21:01:52,872 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace a74fd22b7a0f88c803b87cf527a37448 2023-02-14 21:01:52,872 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(865): Instantiated hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-14 21:01:52,872 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7890): checking encryption for a74fd22b7a0f88c803b87cf527a37448 2023-02-14 21:01:52,872 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7893): checking classloading for a74fd22b7a0f88c803b87cf527a37448 2023-02-14 21:01:52,874 INFO [StoreOpener-a74fd22b7a0f88c803b87cf527a37448-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region a74fd22b7a0f88c803b87cf527a37448 2023-02-14 21:01:52,876 DEBUG [StoreOpener-a74fd22b7a0f88c803b87cf527a37448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/namespace/a74fd22b7a0f88c803b87cf527a37448/info 2023-02-14 21:01:52,876 DEBUG [StoreOpener-a74fd22b7a0f88c803b87cf527a37448-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/namespace/a74fd22b7a0f88c803b87cf527a37448/info 2023-02-14 21:01:52,877 INFO [StoreOpener-a74fd22b7a0f88c803b87cf527a37448-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a74fd22b7a0f88c803b87cf527a37448 columnFamilyName info 2023-02-14 21:01:52,878 INFO [StoreOpener-a74fd22b7a0f88c803b87cf527a37448-1] regionserver.HStore(310): Store=a74fd22b7a0f88c803b87cf527a37448/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:52,880 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/namespace/a74fd22b7a0f88c803b87cf527a37448 2023-02-14 21:01:52,881 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/namespace/a74fd22b7a0f88c803b87cf527a37448 2023-02-14 21:01:52,886 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1054): writing seq id for a74fd22b7a0f88c803b87cf527a37448 2023-02-14 21:01:52,889 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/namespace/a74fd22b7a0f88c803b87cf527a37448/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-02-14 21:01:52,890 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1071): Opened a74fd22b7a0f88c803b87cf527a37448; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=68589634, jitterRate=0.022065192461013794}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-02-14 21:01:52,890 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(964): Region open journal for a74fd22b7a0f88c803b87cf527a37448: 2023-02-14 21:01:52,892 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2335): Post open deploy tasks for hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448., pid=6, masterSystemTime=1676408512858 2023-02-14 21:01:52,896 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2362): Finished post open deploy task for hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. 
2023-02-14 21:01:52,896 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(156): Opened hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. 2023-02-14 21:01:52,898 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=a74fd22b7a0f88c803b87cf527a37448, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:52,898 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1676408512897"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1676408512897"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1676408512897"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1676408512897"}]},"ts":"1676408512897"} 2023-02-14 21:01:52,905 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-02-14 21:01:52,905 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure a74fd22b7a0f88c803b87cf527a37448, server=jenkins-hbase12.apache.org,42689,1676408510213 in 198 msec 2023-02-14 21:01:52,909 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-02-14 21:01:52,910 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=a74fd22b7a0f88c803b87cf527a37448, ASSIGN in 370 msec 2023-02-14 21:01:52,911 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-02-14 21:01:52,911 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1676408512911"}]},"ts":"1676408512911"} 2023-02-14 21:01:52,914 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-02-14 21:01:53,004 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-02-14 21:01:53,004 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-02-14 21:01:53,007 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 655 msec 2023-02-14 21:01:53,013 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-02-14 21:01:53,013 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:53,053 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): 
Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-02-14 21:01:53,076 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-02-14 21:01:53,092 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 49 msec 2023-02-14 21:01:53,096 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-02-14 21:01:53,118 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-02-14 21:01:53,134 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 36 msec 2023-02-14 21:01:53,160 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-02-14 21:01:53,181 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-02-14 21:01:53,182 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.830sec 2023-02-14 21:01:53,184 INFO [master/jenkins-hbase12:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-02-14 21:01:53,185 INFO [master/jenkins-hbase12:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-02-14 21:01:53,185 INFO [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-02-14 21:01:53,187 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,43051,1676408508905-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-02-14 21:01:53,188 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,43051,1676408508905-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-02-14 21:01:53,194 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ReadOnlyZKClient(139): Connect 0x3c1b812c to 127.0.0.1:51069 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-14 21:01:53,199 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-02-14 21:01:53,211 DEBUG [Listener at localhost.localdomain/38639] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2c1adaba, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-14 21:01:53,224 DEBUG [hconnection-0x2b2eb6c5-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-02-14 21:01:53,236 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:37822, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-02-14 21:01:53,244 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase12.apache.org,43051,1676408508905 2023-02-14 21:01:53,244 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ReadOnlyZKClient(139): Connect 0x33208e2d to 127.0.0.1:51069 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-14 21:01:53,263 DEBUG [ReadOnlyZKClient-127.0.0.1:51069@0x33208e2d] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4618c6da, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-14 21:01:53,292 DEBUG [Listener at localhost.localdomain/38639] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-02-14 21:01:53,318 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:38002, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-02-14 21:01:53,320 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37197] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,37197,1676408510161' ***** 2023-02-14 21:01:53,320 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37197] regionserver.HRegionServer(2310): STOPPED: Called by admin client org.apache.hadoop.hbase.client.AsyncConnectionImpl@5a8728ac 2023-02-14 21:01:53,321 INFO [RS:0;jenkins-hbase12:37197] regionserver.HeapMemoryManager(220): Stopping 2023-02-14 21:01:53,321 INFO [RS:0;jenkins-hbase12:37197] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-02-14 21:01:53,321 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-02-14 21:01:53,322 INFO [RS:0;jenkins-hbase12:37197] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-02-14 21:01:53,322 INFO [RS:0;jenkins-hbase12:37197] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,37197,1676408510161 2023-02-14 21:01:53,322 DEBUG [RS:0;jenkins-hbase12:37197] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2374c727 to 127.0.0.1:51069 2023-02-14 21:01:53,323 DEBUG [RS:0;jenkins-hbase12:37197] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:01:53,323 INFO [RS:0;jenkins-hbase12:37197] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,37197,1676408510161; all regions closed. 2023-02-14 21:01:53,327 DEBUG [Listener at localhost.localdomain/38639] client.ConnectionUtils(586): Start fetching master stub from registry 2023-02-14 21:01:53,342 DEBUG [RS:0;jenkins-hbase12:37197] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/oldWALs 2023-02-14 21:01:53,342 INFO [RS:0;jenkins-hbase12:37197] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C37197%2C1676408510161:(num 1676408511824) 2023-02-14 21:01:53,342 DEBUG [RS:0;jenkins-hbase12:37197] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:01:53,342 INFO [RS:0;jenkins-hbase12:37197] regionserver.LeaseManager(133): Closed leases 2023-02-14 21:01:53,344 INFO [RS:0;jenkins-hbase12:37197] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase12:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-02-14 21:01:53,344 INFO [RS:0;jenkins-hbase12:37197] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-02-14 21:01:53,344 INFO [regionserver/jenkins-hbase12:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-02-14 21:01:53,344 INFO [RS:0;jenkins-hbase12:37197] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-02-14 21:01:53,344 INFO [RS:0;jenkins-hbase12:37197] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-02-14 21:01:53,345 INFO [RS:0;jenkins-hbase12:37197] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:37197 2023-02-14 21:01:53,376 INFO [regionserver/jenkins-hbase12:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-02-14 21:01:53,421 DEBUG [ReadOnlyZKClient-127.0.0.1:51069@0x33208e2d] client.AsyncConnectionImpl(289): The fetched master address is jenkins-hbase12.apache.org,43051,1676408508905 2023-02-14 21:01:53,426 DEBUG [ReadOnlyZKClient-127.0.0.1:51069@0x33208e2d] client.ConnectionUtils(594): The fetched master stub is org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$Stub@114ce486 2023-02-14 21:01:53,432 DEBUG [ReadOnlyZKClient-127.0.0.1:51069@0x33208e2d] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-02-14 21:01:53,435 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:58970, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-02-14 21:01:53,436 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43051] master.MasterRpcServices(1601): Client=jenkins//136.243.104.168 stop 2023-02-14 21:01:53,436 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43051] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,43051,1676408508905' ***** 2023-02-14 21:01:53,436 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43051] regionserver.HRegionServer(2310): STOPPED: Stopped by RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43051 2023-02-14 21:01:53,436 DEBUG [M:0;jenkins-hbase12:43051] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7939085b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-14 21:01:53,437 INFO [M:0;jenkins-hbase12:43051] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,43051,1676408508905 2023-02-14 21:01:53,437 DEBUG [M:0;jenkins-hbase12:43051] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x073e06b0 to 127.0.0.1:51069 2023-02-14 21:01:53,437 DEBUG [M:0;jenkins-hbase12:43051] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:01:53,437 INFO [M:0;jenkins-hbase12:43051] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,43051,1676408508905; all regions closed. 2023-02-14 21:01:53,437 DEBUG [M:0;jenkins-hbase12:43051] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:01:53,437 DEBUG [M:0;jenkins-hbase12:43051] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-02-14 21:01:53,437 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
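Illustrative sketch, not part of the captured log: the "***** STOPPING region server ... Called by admin client org.apache.hadoop.hbase.client.AsyncConnectionImpl" entry and the MasterRpcServices "Client=jenkins//136.243.104.168 stop" entry above are the server-side trace of AsyncAdmin calls. A minimal client-side sketch of that sequence follows, assuming a stock HBase 2.x configuration; the exact calls made by TestAsyncClusterAdminApi2#testStop are not visible in the log, so stopRegionServer() followed by stopMaster() is an assumption.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.AsyncAdmin;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class StopClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Connect the async client (the log's AsyncConnectionImpl) and get the admin facade.
    try (AsyncConnection conn = ConnectionFactory.createAsyncConnection(conf).get()) {
      AsyncAdmin admin = conn.getAdmin();
      // Stop one region server, as in the "***** STOPPING region server ... *****" entry.
      ServerName rs = admin.getRegionServers().get().iterator().next();
      admin.stopRegionServer(rs).get();
      // Ask the master to stop, as in the MasterRpcServices "stop" entry (assumed to be stopMaster()).
      admin.stopMaster().get();
    }
  }
}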
2023-02-14 21:01:53,437 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.large.0-1676408511460] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.large.0-1676408511460,5,FailOnTimeoutGroup] 2023-02-14 21:01:53,437 DEBUG [M:0;jenkins-hbase12:43051] cleaner.HFileCleaner(317): Stopping file delete threads 2023-02-14 21:01:53,437 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.small.0-1676408511460] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.small.0-1676408511460,5,FailOnTimeoutGroup] 2023-02-14 21:01:53,439 INFO [M:0;jenkins-hbase12:43051] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-02-14 21:01:53,439 INFO [M:0;jenkins-hbase12:43051] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-02-14 21:01:53,439 INFO [M:0;jenkins-hbase12:43051] hbase.ChoreService(369): Chore service for: master/jenkins-hbase12:0 had [] on shutdown 2023-02-14 21:01:53,439 DEBUG [M:0;jenkins-hbase12:43051] master.HMaster(1512): Stopping service threads 2023-02-14 21:01:53,440 INFO [M:0;jenkins-hbase12:43051] procedure2.RemoteProcedureDispatcher(118): Stopping procedure remote dispatcher 2023-02-14 21:01:53,440 INFO [M:0;jenkins-hbase12:43051] procedure2.ProcedureExecutor(629): Stopping 2023-02-14 21:01:53,441 ERROR [M:0;jenkins-hbase12:43051] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] 2023-02-14 21:01:53,442 INFO [M:0;jenkins-hbase12:43051] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-02-14 21:01:53,442 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
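The ERROR above ("ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads ... Thread[HFileArchiver-1,5,PEWorkerGroup]") is the ProcedureExecutor refusing to consider itself cleanly stopped while its worker ThreadGroup still has live members. A JDK-only sketch of that kind of check, reusing the group and thread names from the log purely as labels; this is not HBase's ProcedureExecutor code.

public class ThreadGroupCheckSketch {
  public static void main(String[] args) {
    ThreadGroup group = new ThreadGroup("PEWorkerGroup");            // name copied from the log entry
    Thread worker = new Thread(group, () -> {
      try { Thread.sleep(60_000); } catch (InterruptedException ignored) { }
    }, "HFileArchiver-1");
    worker.setDaemon(true);
    worker.start();

    Thread[] live = new Thread[group.activeCount() + 1];
    int n = group.enumerate(live);                                    // snapshot of still-running group members
    if (n > 0) {
      System.err.println(group + " contains running threads:");
      for (int i = 0; i < n; i++) {
        System.err.println("  " + live[i]);
      }
    }
  }
}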
2023-02-14 21:01:53,498 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,37197,1676408510161 2023-02-14 21:01:53,498 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,37197,1676408510161 2023-02-14 21:01:53,498 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,37197,1676408510161 2023-02-14 21:01:53,498 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:01:53,498 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:01:53,498 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@68f944a1 rejected from java.util.concurrent.ThreadPoolExecutor@6cc6cd66[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 3] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:53,500 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:01:53,498 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:01:53,501 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@3ad953a2 rejected from java.util.concurrent.ThreadPoolExecutor@6cc6cd66[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 3] at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:53,590 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-02-14 21:01:53,590 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-02-14 21:01:53,590 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-02-14 21:01:53,590 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-02-14 21:01:53,591 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@4250879b rejected from java.util.concurrent.ThreadPoolExecutor@6cc6cd66[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 3] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:53,590 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:53,595 INFO [Listener at localhost.localdomain/38639] client.AsyncConnectionImpl(207): Connection has been closed by Listener at localhost.localdomain/38639. 
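The repeated "Error while calling watcher ... RejectedExecutionException" entries above share one mechanism: ZooKeeper keeps delivering NodeDeleted/NodeChildrenChanged events after the ZKWatcher's executor has been terminated during teardown, and the late submit() is rejected by the dead pool. A minimal JDK-only reproduction of that mechanism (the pool and tasks here are invented for illustration, not HBase code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.TimeUnit;

public class LateWatcherEventSketch {
  public static void main(String[] args) throws InterruptedException {
    ExecutorService zkEventPool = Executors.newSingleThreadExecutor();
    zkEventPool.submit(() -> System.out.println("NodeDeleted handled"));    // normal event processing
    zkEventPool.shutdown();                                                 // teardown terminates the pool
    zkEventPool.awaitTermination(5, TimeUnit.SECONDS);
    try {
      zkEventPool.submit(() -> System.out.println("NodeChildrenChanged"));  // a late ZK event still arrives
    } catch (RejectedExecutionException e) {
      System.err.println("Error while calling watcher: " + e);              // same symptom as the ERROR lines above
    }
  }
}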
2023-02-14 21:01:53,595 DEBUG [Listener at localhost.localdomain/38639] client.AsyncConnectionImpl(232): Call stack: at java.lang.Thread.getStackTrace(Thread.java:1564) at org.apache.hadoop.hbase.client.AsyncConnectionImpl.close(AsyncConnectionImpl.java:209) at org.apache.hbase.thirdparty.com.google.common.io.Closeables.close(Closeables.java:79) at org.apache.hadoop.hbase.client.TestAsyncClusterAdminApi2.tearDown(TestAsyncClusterAdminApi2.java:75) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:39) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) 2023-02-14 21:01:53,600 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:53,600 DEBUG [M:0;jenkins-hbase12:43051] zookeeper.RecoverableZooKeeper(172): Node /hbase/master already deleted, retry=false 2023-02-14 21:01:53,600 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:53,600 DEBUG [M:0;jenkins-hbase12:43051] master.ActiveMasterManager(335): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Failed delete of our master address node; KeeperErrorCode = NoNode for /hbase/master 2023-02-14 21:01:53,600 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode 
that does not yet exist, /hbase/master 2023-02-14 21:01:53,600 INFO [M:0;jenkins-hbase12:43051] assignment.AssignmentManager(315): Stopping assignment manager 2023-02-14 21:01:53,601 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:53,601 INFO [M:0;jenkins-hbase12:43051] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-02-14 21:01:53,601 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:53,601 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase12.apache.org,37197,1676408510161 znode expired, triggering replicatorRemoved event 2023-02-14 21:01:53,602 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase12.apache.org,37197,1676408510161 znode expired, triggering replicatorRemoved event 2023-02-14 21:01:53,602 DEBUG [M:0;jenkins-hbase12:43051] regionserver.HRegion(1603): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-02-14 21:01:53,602 INFO [M:0;jenkins-hbase12:43051] regionserver.HRegion(1625): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-14 21:01:53,602 DEBUG [M:0;jenkins-hbase12:43051] regionserver.HRegion(1646): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-14 21:01:53,602 DEBUG [M:0;jenkins-hbase12:43051] regionserver.HRegion(1713): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-02-14 21:01:53,602 DEBUG [M:0;jenkins-hbase12:43051] regionserver.HRegion(1723): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-02-14 21:01:53,603 INFO [M:0;jenkins-hbase12:43051] regionserver.HRegion(2744): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.09 KB heapSize=29.59 KB 2023-02-14 21:01:53,604 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:53,604 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:53,605 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:53,605 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:53,605 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-14 21:01:53,605 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-14 21:01:53,605 DEBUG [Listener at localhost.localdomain/38639] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:01:53,606 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x33208e2d to 127.0.0.1:51069 2023-02-14 21:01:53,606 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-02-14 21:01:53,606 DEBUG [Listener at localhost.localdomain/38639] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3c1b812c to 127.0.0.1:51069 2023-02-14 21:01:53,606 DEBUG [Listener at localhost.localdomain/38639] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:01:53,607 DEBUG [Listener at localhost.localdomain/38639] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-02-14 21:01:53,607 INFO [Listener at localhost.localdomain/38639] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,42689,1676408510213' ***** 2023-02-14 21:01:53,607 INFO [Listener at localhost.localdomain/38639] regionserver.HRegionServer(2310): STOPPED: Shutdown requested 2023-02-14 21:01:53,607 INFO [Listener at localhost.localdomain/38639] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,38623,1676408510252' ***** 2023-02-14 21:01:53,607 INFO [Listener at localhost.localdomain/38639] regionserver.HRegionServer(2310): STOPPED: Shutdown requested 2023-02-14 21:01:53,608 INFO [RS:1;jenkins-hbase12:42689] regionserver.HeapMemoryManager(220): Stopping 2023-02-14 21:01:53,608 INFO [RS:2;jenkins-hbase12:38623] regionserver.HeapMemoryManager(220): Stopping 2023-02-14 21:01:53,608 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-02-14 21:01:53,608 INFO [RS:2;jenkins-hbase12:38623] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure 
manager gracefully. 2023-02-14 21:01:53,608 INFO [RS:1;jenkins-hbase12:42689] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-02-14 21:01:53,608 INFO [RS:2;jenkins-hbase12:38623] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-02-14 21:01:53,608 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-02-14 21:01:53,609 INFO [RS:2;jenkins-hbase12:38623] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:53,608 INFO [RS:1;jenkins-hbase12:42689] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-02-14 21:01:53,609 DEBUG [RS:2;jenkins-hbase12:38623] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x081cf657 to 127.0.0.1:51069 2023-02-14 21:01:53,609 DEBUG [RS:2;jenkins-hbase12:38623] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:01:53,609 INFO [RS:2;jenkins-hbase12:38623] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,38623,1676408510252; all regions closed. 2023-02-14 21:01:53,609 INFO [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(3304): Received CLOSE for a74fd22b7a0f88c803b87cf527a37448 2023-02-14 21:01:53,610 INFO [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:53,610 DEBUG [RS:1;jenkins-hbase12:42689] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x77e29910 to 127.0.0.1:51069 2023-02-14 21:01:53,610 DEBUG [RS:1;jenkins-hbase12:42689] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:01:53,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1603): Closing a74fd22b7a0f88c803b87cf527a37448, disabling compactions & flushes 2023-02-14 21:01:53,611 INFO [RS:1;jenkins-hbase12:42689] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-02-14 21:01:53,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1625): Closing region hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. 2023-02-14 21:01:53,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. 2023-02-14 21:01:53,611 INFO [RS:1;jenkins-hbase12:42689] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-02-14 21:01:53,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1713): Acquired close lock on hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. after waiting 0 ms 2023-02-14 21:01:53,611 INFO [RS:1;jenkins-hbase12:42689] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-02-14 21:01:53,611 INFO [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(3304): Received CLOSE for 1588230740 2023-02-14 21:01:53,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1723): Updates disabled for region hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. 
2023-02-14 21:01:53,611 INFO [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(1475): Waiting on 2 regions to close 2023-02-14 21:01:53,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2744): Flushing a74fd22b7a0f88c803b87cf527a37448 1/1 column families, dataSize=78 B heapSize=488 B 2023-02-14 21:01:53,612 DEBUG [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(1479): Online Regions={1588230740=hbase:meta,,1.1588230740, a74fd22b7a0f88c803b87cf527a37448=hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448.} 2023-02-14 21:01:53,612 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1603): Closing 1588230740, disabling compactions & flushes 2023-02-14 21:01:53,613 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1625): Closing region hbase:meta,,1.1588230740 2023-02-14 21:01:53,613 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-02-14 21:01:53,613 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1713): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-02-14 21:01:53,613 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1723): Updates disabled for region hbase:meta,,1.1588230740 2023-02-14 21:01:53,614 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2744): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-02-14 21:01:53,616 DEBUG [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(1505): Waiting on 1588230740, a74fd22b7a0f88c803b87cf527a37448 2023-02-14 21:01:53,622 DEBUG [RS:2;jenkins-hbase12:38623] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/oldWALs 2023-02-14 21:01:53,622 INFO [RS:2;jenkins-hbase12:38623] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C38623%2C1676408510252:(num 1676408511825) 2023-02-14 21:01:53,622 DEBUG [RS:2;jenkins-hbase12:38623] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:01:53,622 INFO [RS:2;jenkins-hbase12:38623] regionserver.LeaseManager(133): Closed leases 2023-02-14 21:01:53,622 INFO [RS:2;jenkins-hbase12:38623] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase12:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-02-14 21:01:53,622 INFO [RS:2;jenkins-hbase12:38623] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-02-14 21:01:53,622 INFO [regionserver/jenkins-hbase12:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-02-14 21:01:53,622 INFO [RS:2;jenkins-hbase12:38623] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-02-14 21:01:53,622 INFO [RS:2;jenkins-hbase12:38623] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-02-14 21:01:53,623 INFO [RS:2;jenkins-hbase12:38623] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:38623 2023-02-14 21:01:53,634 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:53,634 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,38623,1676408510252 2023-02-14 21:01:53,634 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:01:53,634 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@30597fc2 rejected from java.util.concurrent.ThreadPoolExecutor@3466ed11[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 6] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:53,634 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:01:53,635 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@3b03483a rejected from java.util.concurrent.ThreadPoolExecutor@3466ed11[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 6] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:53,675 INFO [regionserver/jenkins-hbase12:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker 
was stopped 2023-02-14 21:01:53,675 INFO [regionserver/jenkins-hbase12:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-02-14 21:01:53,675 INFO [regionserver/jenkins-hbase12:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-02-14 21:01:53,677 INFO [regionserver/jenkins-hbase12:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-02-14 21:01:53,693 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/namespace/a74fd22b7a0f88c803b87cf527a37448/.tmp/info/7b3c8e7385e54c968d75ab9dafad1aa8 2023-02-14 21:01:53,694 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/.tmp/info/b8c0672e45a0408f975c3439b20b8a27 2023-02-14 21:01:53,698 INFO [M:0;jenkins-hbase12:43051] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.09 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9b766aea8787439aa8a07a68e8eba593 2023-02-14 21:01:53,735 DEBUG [M:0;jenkins-hbase12:43051] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9b766aea8787439aa8a07a68e8eba593 as hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9b766aea8787439aa8a07a68e8eba593 2023-02-14 21:01:53,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/namespace/a74fd22b7a0f88c803b87cf527a37448/.tmp/info/7b3c8e7385e54c968d75ab9dafad1aa8 as hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/namespace/a74fd22b7a0f88c803b87cf527a37448/info/7b3c8e7385e54c968d75ab9dafad1aa8 2023-02-14 21:01:53,746 INFO [M:0;jenkins-hbase12:43051] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9b766aea8787439aa8a07a68e8eba593, entries=8, sequenceid=66, filesize=6.3 K 2023-02-14 21:01:53,747 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/namespace/a74fd22b7a0f88c803b87cf527a37448/info/7b3c8e7385e54c968d75ab9dafad1aa8, entries=2, sequenceid=6, filesize=4.8 K 2023-02-14 21:01:53,749 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2947): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for a74fd22b7a0f88c803b87cf527a37448 in 138ms, sequenceid=6, compaction requested=false 2023-02-14 21:01:53,749 INFO [M:0;jenkins-hbase12:43051] regionserver.HRegion(2947): Finished flush 
of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 146ms, sequenceid=66, compaction requested=false 2023-02-14 21:01:53,750 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-02-14 21:01:53,756 INFO [M:0;jenkins-hbase12:43051] regionserver.HRegion(1837): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-14 21:01:53,756 DEBUG [M:0;jenkins-hbase12:43051] regionserver.HRegion(1557): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-02-14 21:01:53,764 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-02-14 21:01:53,764 INFO [M:0;jenkins-hbase12:43051] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-02-14 21:01:53,764 INFO [M:0;jenkins-hbase12:43051] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:43051 2023-02-14 21:01:53,766 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/.tmp/table/2c11c5b11c5041879c21b8566e480473 2023-02-14 21:01:53,768 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/namespace/a74fd22b7a0f88c803b87cf527a37448/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-02-14 21:01:53,770 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1837): Closed hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. 2023-02-14 21:01:53,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1557): Region close journal for a74fd22b7a0f88c803b87cf527a37448: 2023-02-14 21:01:53,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1676408512348.a74fd22b7a0f88c803b87cf527a37448. 
2023-02-14 21:01:53,776 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/.tmp/info/b8c0672e45a0408f975c3439b20b8a27 as hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/info/b8c0672e45a0408f975c3439b20b8a27 2023-02-14 21:01:53,778 DEBUG [M:0;jenkins-hbase12:43051] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase12.apache.org,43051,1676408508905 already deleted, retry=false 2023-02-14 21:01:53,787 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/info/b8c0672e45a0408f975c3439b20b8a27, entries=10, sequenceid=9, filesize=5.9 K 2023-02-14 21:01:53,789 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/.tmp/table/2c11c5b11c5041879c21b8566e480473 as hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/table/2c11c5b11c5041879c21b8566e480473 2023-02-14 21:01:53,797 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/table/2c11c5b11c5041879c21b8566e480473, entries=2, sequenceid=9, filesize=4.7 K 2023-02-14 21:01:53,799 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2947): Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 186ms, sequenceid=9, compaction requested=false 2023-02-14 21:01:53,799 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-02-14 21:01:53,809 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-02-14 21:01:53,810 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-02-14 21:01:53,810 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1837): Closed hbase:meta,,1.1588230740 2023-02-14 21:01:53,810 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1557): Region close journal for 1588230740: 2023-02-14 21:01:53,810 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-02-14 21:01:53,817 INFO [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,42689,1676408510213; all regions closed. 
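The flush entries above follow one pattern: the memstore is written to a file under the region's .tmp directory, then "Committing <.tmp path> as <final path>" moves it into the store with a single rename so readers never observe a partial file. A sketch of that write-then-rename commit using the plain Hadoop FileSystem API, with hypothetical paths; this is not HBase's HRegionFileSystem code.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CommitByRenameSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path tmp = new Path("/tmp/region-sketch/.tmp/info/flushed-file");   // hypothetical flush output location
    Path dst = new Path("/tmp/region-sketch/info/flushed-file");        // hypothetical final store location
    try (FSDataOutputStream out = fs.create(tmp)) {                     // write the new file under .tmp first
      out.writeBytes("flushed cells would go here");
    }
    fs.mkdirs(dst.getParent());
    // Commit: a single rename makes the complete file visible in the store.
    if (!fs.rename(tmp, dst)) {
      throw new IOException("Failed to commit " + tmp + " as " + dst);
    }
  }
}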
2023-02-14 21:01:53,824 DEBUG [RS:1;jenkins-hbase12:42689] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/oldWALs 2023-02-14 21:01:53,824 INFO [RS:1;jenkins-hbase12:42689] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C42689%2C1676408510213.meta:.meta(num 1676408512003) 2023-02-14 21:01:53,830 DEBUG [RS:1;jenkins-hbase12:42689] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/oldWALs 2023-02-14 21:01:53,831 INFO [RS:1;jenkins-hbase12:42689] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C42689%2C1676408510213:(num 1676408511825) 2023-02-14 21:01:53,831 DEBUG [RS:1;jenkins-hbase12:42689] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:01:53,831 INFO [RS:1;jenkins-hbase12:42689] regionserver.LeaseManager(133): Closed leases 2023-02-14 21:01:53,831 INFO [RS:1;jenkins-hbase12:42689] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase12:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-02-14 21:01:53,831 INFO [regionserver/jenkins-hbase12:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-02-14 21:01:53,832 INFO [RS:1;jenkins-hbase12:42689] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:42689 2023-02-14 21:01:53,845 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,42689,1676408510213 2023-02-14 21:01:53,845 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@2b0f3099 rejected from java.util.concurrent.ThreadPoolExecutor@2f30a120[Shutting down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 8] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:54,144 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:01:54,144 INFO [RS:0;jenkins-hbase12:37197] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,37197,1676408510161; zookeeper connection closed. 
2023-02-14 21:01:54,144 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@395faf09 rejected from java.util.concurrent.ThreadPoolExecutor@6cc6cd66[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 3] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:54,145 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:37197-0x10163479a3f0001, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:01:54,145 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@19053c98] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@19053c98 2023-02-14 21:01:54,145 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@59546b79 rejected from java.util.concurrent.ThreadPoolExecutor@6cc6cd66[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 3] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:54,244 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:01:54,244 INFO [RS:1;jenkins-hbase12:42689] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,42689,1676408510213; zookeeper connection closed. 
2023-02-14 21:01:54,245 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@9cbf7fe rejected from java.util.concurrent.ThreadPoolExecutor@2f30a120[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:54,245 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@10ed62e1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@10ed62e1 2023-02-14 21:01:54,245 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:42689-0x10163479a3f0002, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:01:54,246 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@71810655 rejected from java.util.concurrent.ThreadPoolExecutor@2f30a120[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:54,345 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:01:54,345 INFO [M:0;jenkins-hbase12:43051] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,43051,1676408508905; zookeeper connection closed. 
2023-02-14 21:01:54,345 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@2ebdaf20 rejected from java.util.concurrent.ThreadPoolExecutor@214a1079[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 24] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:54,346 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): master:43051-0x10163479a3f0000, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:01:54,346 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@8713861 rejected from java.util.concurrent.ThreadPoolExecutor@214a1079[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 24] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:54,445 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:01:54,446 INFO [RS:2;jenkins-hbase12:38623] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,38623,1676408510252; zookeeper connection closed. 
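At this point every HBase process has released its ZooKeeper session; the remaining entries are the test utility tearing down the minicluster, the DataNodes, and MiniZK, followed by the ResourceChecker's "Potentially hanging thread" report. A hedged sketch of what that teardown and a leaked-thread listing look like from the test side; the class and field names here are illustrative assumptions, not the captured test source.

import java.util.Map;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;

public class TeardownSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    TEST_UTIL.shutdownMiniCluster();                       // "Shutting down minicluster" ... "Minicluster is down"
    for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
      Thread t = e.getKey();
      if (t.isAlive() && !t.isDaemon()) {                  // roughly what a leaked-thread report looks for
        System.out.println("Potentially hanging thread: " + t.getName());
        for (StackTraceElement frame : e.getValue()) {
          System.out.println("    " + frame);
        }
      }
    }
  }
}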
2023-02-14 21:01:54,446 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@4352e498 rejected from java.util.concurrent.ThreadPoolExecutor@3466ed11[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 6] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:54,446 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@30e07237] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@30e07237 2023-02-14 21:01:54,446 DEBUG [Listener at localhost.localdomain/38639-EventThread] zookeeper.ZKWatcher(600): regionserver:38623-0x10163479a3f0003, quorum=127.0.0.1:51069, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:01:54,447 ERROR [Listener at localhost.localdomain/38639-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@3028523d rejected from java.util.concurrent.ThreadPoolExecutor@3466ed11[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 6] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:01:54,447 INFO [Listener at localhost.localdomain/38639] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-02-14 21:01:54,450 WARN [Listener at localhost.localdomain/38639] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-02-14 21:01:54,490 INFO [Listener at localhost.localdomain/38639] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-02-14 21:01:54,597 WARN [BP-593960752-136.243.104.168-1676408504254 heartbeating to localhost.localdomain/127.0.0.1:40959] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-02-14 21:01:54,597 WARN [BP-593960752-136.243.104.168-1676408504254 heartbeating to localhost.localdomain/127.0.0.1:40959] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-593960752-136.243.104.168-1676408504254 (Datanode Uuid 5a2e490c-60b6-48d5-8f8d-bbb4a086af3e) service to localhost.localdomain/127.0.0.1:40959 2023-02-14 21:01:54,601 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/cluster_6dc3721e-d3ec-6e0d-bdff-2016947e8490/dfs/data/data5/current/BP-593960752-136.243.104.168-1676408504254] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-14 21:01:54,601 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/cluster_6dc3721e-d3ec-6e0d-bdff-2016947e8490/dfs/data/data6/current/BP-593960752-136.243.104.168-1676408504254] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-14 21:01:54,604 WARN [Listener at localhost.localdomain/38639] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-02-14 21:01:54,607 INFO [Listener at localhost.localdomain/38639] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-02-14 21:01:54,711 WARN [BP-593960752-136.243.104.168-1676408504254 heartbeating to localhost.localdomain/127.0.0.1:40959] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-02-14 21:01:54,712 WARN [BP-593960752-136.243.104.168-1676408504254 heartbeating to localhost.localdomain/127.0.0.1:40959] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-593960752-136.243.104.168-1676408504254 (Datanode Uuid 7afee2d3-f09e-43e2-ac6a-b6be431c5702) service to localhost.localdomain/127.0.0.1:40959 2023-02-14 21:01:54,713 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/cluster_6dc3721e-d3ec-6e0d-bdff-2016947e8490/dfs/data/data3/current/BP-593960752-136.243.104.168-1676408504254] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-14 21:01:54,714 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/cluster_6dc3721e-d3ec-6e0d-bdff-2016947e8490/dfs/data/data4/current/BP-593960752-136.243.104.168-1676408504254] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-14 21:01:54,717 WARN [Listener at localhost.localdomain/38639] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-02-14 21:01:54,720 INFO [Listener at localhost.localdomain/38639] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-02-14 21:01:54,823 WARN [BP-593960752-136.243.104.168-1676408504254 heartbeating to localhost.localdomain/127.0.0.1:40959] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-02-14 21:01:54,823 WARN [BP-593960752-136.243.104.168-1676408504254 heartbeating to localhost.localdomain/127.0.0.1:40959] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-593960752-136.243.104.168-1676408504254 (Datanode Uuid dbba8f00-6bd9-4753-bf9d-fb1cb1ebab05) service to localhost.localdomain/127.0.0.1:40959 2023-02-14 21:01:54,825 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/cluster_6dc3721e-d3ec-6e0d-bdff-2016947e8490/dfs/data/data1/current/BP-593960752-136.243.104.168-1676408504254] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-14 21:01:54,826 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/cluster_6dc3721e-d3ec-6e0d-bdff-2016947e8490/dfs/data/data2/current/BP-593960752-136.243.104.168-1676408504254] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-14 21:01:54,853 INFO [Listener at localhost.localdomain/38639] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-02-14 21:01:54,974 INFO [Listener at localhost.localdomain/38639] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-02-14 21:01:55,010 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-02-14 21:01:55,021 INFO [Listener at localhost.localdomain/38639] hbase.ResourceChecker(175): after: client.TestAsyncClusterAdminApi2#testStop Thread=82 (was 8) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost.localdomain:40959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase12:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:40959 from jenkins.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ForkJoinPool-2-worker-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:40959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost.localdomain:40959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:40959 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-6-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:40959 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost.localdomain:40959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: nioEventLoopGroup-7-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase12:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase12:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@3fd147fc java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-6-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReplicationExecutor-0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager$NodeFailoverWorker.run(ReplicationSourceManager.java:703) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-7-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'NameNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReplicationExecutor-0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager$NodeFailoverWorker.run(ReplicationSourceManager.java:703) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase12:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:40959 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/38639 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:39) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:40959 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-6-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-7-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase12:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) - Thread LEAK? -, OpenFileDescriptor=497 (was 260) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=291 (was 288) - SystemLoadAverage LEAK? -, ProcessCount=170 (was 170), AvailableMemoryMB=5083 (was 5684) 2023-02-14 21:01:55,031 INFO [Listener at localhost.localdomain/38639] hbase.ResourceChecker(147): before: client.TestAsyncClusterAdminApi2#testShutdown Thread=82, OpenFileDescriptor=497, MaxFileDescriptor=60000, SystemLoadAverage=291, ProcessCount=170, AvailableMemoryMB=5082 2023-02-14 21:01:55,031 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-02-14 21:01:55,032 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/hadoop.log.dir so I do NOT create it in target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef 2023-02-14 21:01:55,032 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/586b079e-b06c-e7f3-d70a-dcbad97c50d6/hadoop.tmp.dir so I do NOT create it in target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef 2023-02-14 21:01:55,032 INFO [Listener at localhost.localdomain/38639] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/cluster_5ca65044-4646-225b-3c6e-01c7d1dead07, deleteOnExit=true 2023-02-14 21:01:55,032 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-02-14 21:01:55,032 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/test.cache.data in system properties and HBase conf 2023-02-14 21:01:55,032 INFO [Listener at localhost.localdomain/38639] 
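The StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1, ...} printed above for testShutdown reflects the cluster shape the test asks for programmatically. A rough sketch of that setup, assuming the HBase 2.x testing API (HBaseTestingUtility, StartMiniClusterOption.builder(), startMiniCluster/shutdownMiniCluster); the exact method names are assumptions to verify against the branch-2.4 sources:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();

    // Matches the option printed in the log: 1 master, 3 region servers, 3 datanodes, 1 ZK server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .createRootDir(false)
        .createWALDir(false)
        .build();

    // Produces the "Starting up minicluster with option: ..." and DFS/ZK startup lines.
    util.startMiniCluster(option);
    try {
      // Test body would exercise the cluster here, e.g. via util.getAdmin().
    } finally {
      // Drives the teardown sequence logged earlier: JVMClusterUtil shutdown,
      // datanode block pool shutdown, MiniZK shutdown, "Minicluster is down".
      util.shutdownMiniCluster();
    }
  }
}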
hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/hadoop.tmp.dir in system properties and HBase conf 2023-02-14 21:01:55,032 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/hadoop.log.dir in system properties and HBase conf 2023-02-14 21:01:55,032 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/mapreduce.cluster.local.dir in system properties and HBase conf 2023-02-14 21:01:55,033 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-02-14 21:01:55,033 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-02-14 21:01:55,033 DEBUG [Listener at localhost.localdomain/38639] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-02-14 21:01:55,033 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-02-14 21:01:55,033 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-02-14 21:01:55,033 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-02-14 21:01:55,033 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-02-14 21:01:55,034 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-02-14 21:01:55,034 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-02-14 21:01:55,034 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-02-14 21:01:55,034 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/dfs.journalnode.edits.dir in system properties and HBase conf 2023-02-14 21:01:55,034 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-02-14 21:01:55,034 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/nfs.dump.dir in system properties and HBase conf 2023-02-14 21:01:55,034 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/java.io.tmpdir in system properties and HBase conf 2023-02-14 21:01:55,034 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/dfs.journalnode.edits.dir in system properties and HBase conf 2023-02-14 21:01:55,035 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-02-14 21:01:55,035 INFO [Listener at localhost.localdomain/38639] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-02-14 21:01:55,038 WARN [Listener at localhost.localdomain/38639] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-02-14 21:01:55,038 WARN [Listener at localhost.localdomain/38639] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-02-14 21:01:55,483 WARN [Listener at localhost.localdomain/38639] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-02-14 21:01:55,486 INFO [Listener at localhost.localdomain/38639] log.Slf4jLog(67): 
jetty-6.1.26 2023-02-14 21:01:55,492 INFO [Listener at localhost.localdomain/38639] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/java.io.tmpdir/Jetty_localhost_localdomain_37995_hdfs____.p2fvn9/webapp 2023-02-14 21:01:55,567 INFO [Listener at localhost.localdomain/38639] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:37995 2023-02-14 21:01:55,569 WARN [Listener at localhost.localdomain/38639] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-02-14 21:01:55,570 WARN [Listener at localhost.localdomain/38639] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-02-14 21:01:55,786 WARN [Listener at localhost.localdomain/35245] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-02-14 21:01:55,843 WARN [Listener at localhost.localdomain/35245] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-02-14 21:01:55,846 WARN [Listener at localhost.localdomain/35245] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-02-14 21:01:55,847 INFO [Listener at localhost.localdomain/35245] log.Slf4jLog(67): jetty-6.1.26 2023-02-14 21:01:55,852 INFO [Listener at localhost.localdomain/35245] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/java.io.tmpdir/Jetty_localhost_34519_datanode____tm3ev7/webapp 2023-02-14 21:01:55,925 INFO [Listener at localhost.localdomain/35245] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34519 2023-02-14 21:01:55,932 WARN [Listener at localhost.localdomain/41833] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-02-14 21:01:55,944 WARN [Listener at localhost.localdomain/41833] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-02-14 21:01:55,946 WARN [Listener at localhost.localdomain/41833] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-02-14 21:01:55,947 INFO [Listener at localhost.localdomain/41833] log.Slf4jLog(67): jetty-6.1.26 2023-02-14 21:01:55,951 INFO [Listener at localhost.localdomain/41833] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/java.io.tmpdir/Jetty_localhost_39969_datanode____.ppkcjn/webapp 2023-02-14 21:01:56,028 INFO [Listener at localhost.localdomain/41833] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39969 2023-02-14 21:01:56,035 WARN [Listener at localhost.localdomain/37469] common.MetricsLoggerTask(153): Metrics 
logging will not be async since the logger is not log4j 2023-02-14 21:01:56,045 WARN [Listener at localhost.localdomain/37469] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-02-14 21:01:56,047 WARN [Listener at localhost.localdomain/37469] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-02-14 21:01:56,049 INFO [Listener at localhost.localdomain/37469] log.Slf4jLog(67): jetty-6.1.26 2023-02-14 21:01:56,055 INFO [Listener at localhost.localdomain/37469] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/java.io.tmpdir/Jetty_localhost_38943_datanode____skhsgq/webapp 2023-02-14 21:01:56,129 INFO [Listener at localhost.localdomain/37469] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38943 2023-02-14 21:01:56,136 WARN [Listener at localhost.localdomain/46685] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-02-14 21:01:57,352 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf77e42592170dce5: Processing first storage report for DS-ca50d022-0459-4bdc-a5ec-fde8fda75278 from datanode 113f4b12-2e6b-4fb5-b3fa-903d698b0296 2023-02-14 21:01:57,353 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf77e42592170dce5: from storage DS-ca50d022-0459-4bdc-a5ec-fde8fda75278 node DatanodeRegistration(127.0.0.1:39473, datanodeUuid=113f4b12-2e6b-4fb5-b3fa-903d698b0296, infoPort=42375, infoSecurePort=0, ipcPort=41833, storageInfo=lv=-57;cid=testClusterID;nsid=2006867923;c=1676408515040), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-14 21:01:57,353 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf77e42592170dce5: Processing first storage report for DS-8865a331-e1e5-431d-97b5-f7de53965863 from datanode 113f4b12-2e6b-4fb5-b3fa-903d698b0296 2023-02-14 21:01:57,353 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf77e42592170dce5: from storage DS-8865a331-e1e5-431d-97b5-f7de53965863 node DatanodeRegistration(127.0.0.1:39473, datanodeUuid=113f4b12-2e6b-4fb5-b3fa-903d698b0296, infoPort=42375, infoSecurePort=0, ipcPort=41833, storageInfo=lv=-57;cid=testClusterID;nsid=2006867923;c=1676408515040), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-14 21:01:57,543 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfe81a4b6c3e6d5e7: Processing first storage report for DS-29565ff7-2000-4cd4-a62b-e55888461b26 from datanode 97f2e3f1-205b-42b3-8064-dffb9b3c2d6d 2023-02-14 21:01:57,543 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfe81a4b6c3e6d5e7: from storage DS-29565ff7-2000-4cd4-a62b-e55888461b26 node DatanodeRegistration(127.0.0.1:42467, datanodeUuid=97f2e3f1-205b-42b3-8064-dffb9b3c2d6d, infoPort=33541, infoSecurePort=0, ipcPort=37469, storageInfo=lv=-57;cid=testClusterID;nsid=2006867923;c=1676408515040), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-14 21:01:57,543 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0xfe81a4b6c3e6d5e7: Processing first storage report for DS-14d499a5-4ba0-4766-a35f-b3c1becda61b from datanode 97f2e3f1-205b-42b3-8064-dffb9b3c2d6d 2023-02-14 21:01:57,543 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfe81a4b6c3e6d5e7: from storage DS-14d499a5-4ba0-4766-a35f-b3c1becda61b node DatanodeRegistration(127.0.0.1:42467, datanodeUuid=97f2e3f1-205b-42b3-8064-dffb9b3c2d6d, infoPort=33541, infoSecurePort=0, ipcPort=37469, storageInfo=lv=-57;cid=testClusterID;nsid=2006867923;c=1676408515040), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-14 21:01:57,571 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-02-14 21:01:57,677 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1df685183189f651: Processing first storage report for DS-c33e2c50-51d2-4e38-a571-06a91f2eb02b from datanode 232cf55c-0418-4fb9-8ec0-fbd6e0686cdf 2023-02-14 21:01:57,677 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1df685183189f651: from storage DS-c33e2c50-51d2-4e38-a571-06a91f2eb02b node DatanodeRegistration(127.0.0.1:33583, datanodeUuid=232cf55c-0418-4fb9-8ec0-fbd6e0686cdf, infoPort=38429, infoSecurePort=0, ipcPort=46685, storageInfo=lv=-57;cid=testClusterID;nsid=2006867923;c=1676408515040), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-14 21:01:57,677 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1df685183189f651: Processing first storage report for DS-e0a5224e-dc81-4924-ab8c-f479393866de from datanode 232cf55c-0418-4fb9-8ec0-fbd6e0686cdf 2023-02-14 21:01:57,677 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1df685183189f651: from storage DS-e0a5224e-dc81-4924-ab8c-f479393866de node DatanodeRegistration(127.0.0.1:33583, datanodeUuid=232cf55c-0418-4fb9-8ec0-fbd6e0686cdf, infoPort=38429, infoSecurePort=0, ipcPort=46685, storageInfo=lv=-57;cid=testClusterID;nsid=2006867923;c=1676408515040), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-14 21:01:57,767 DEBUG [Listener at localhost.localdomain/46685] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef 2023-02-14 21:01:57,770 INFO [Listener at localhost.localdomain/46685] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/cluster_5ca65044-4646-225b-3c6e-01c7d1dead07/zookeeper_0, clientPort=53584, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/cluster_5ca65044-4646-225b-3c6e-01c7d1dead07/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/cluster_5ca65044-4646-225b-3c6e-01c7d1dead07/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-02-14 21:01:57,772 INFO [Listener 
at localhost.localdomain/46685] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53584 2023-02-14 21:01:57,772 INFO [Listener at localhost.localdomain/46685] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-14 21:01:57,773 INFO [Listener at localhost.localdomain/46685] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-14 21:01:57,794 INFO [Listener at localhost.localdomain/46685] util.FSUtils(479): Created version file at hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574 with version=8 2023-02-14 21:01:57,794 INFO [Listener at localhost.localdomain/46685] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:40959/user/jenkins/test-data/6c097184-8624-82b7-1bac-d3585f61a49e/hbase-staging 2023-02-14 21:01:57,796 INFO [Listener at localhost.localdomain/46685] client.ConnectionUtils(127): master/jenkins-hbase12:0 server-side Connection retries=6 2023-02-14 21:01:57,796 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-14 21:01:57,796 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-02-14 21:01:57,796 INFO [Listener at localhost.localdomain/46685] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-02-14 21:01:57,796 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-14 21:01:57,797 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-02-14 21:01:57,797 INFO [Listener at localhost.localdomain/46685] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-02-14 21:01:57,799 INFO [Listener at localhost.localdomain/46685] ipc.NettyRpcServer(120): Bind to /136.243.104.168:39877 2023-02-14 21:01:57,800 INFO [Listener at localhost.localdomain/46685] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-14 21:01:57,801 INFO [Listener at localhost.localdomain/46685] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-14 21:01:57,802 INFO [Listener at localhost.localdomain/46685] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39877 connecting to ZooKeeper ensemble=127.0.0.1:53584 2023-02-14 
21:01:57,867 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:398770x0, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-02-14 21:01:57,870 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): master:39877-0x1016347bff40000 connected 2023-02-14 21:01:57,961 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ZKUtil(164): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-14 21:01:57,962 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ZKUtil(164): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-14 21:01:57,963 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ZKUtil(164): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-02-14 21:01:57,964 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39877 2023-02-14 21:01:57,964 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39877 2023-02-14 21:01:57,964 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39877 2023-02-14 21:01:57,965 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39877 2023-02-14 21:01:57,966 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39877 2023-02-14 21:01:57,966 INFO [Listener at localhost.localdomain/46685] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574, hbase.cluster.distributed=false 2023-02-14 21:01:57,984 INFO [Listener at localhost.localdomain/46685] client.ConnectionUtils(127): regionserver/jenkins-hbase12:0 server-side Connection retries=6 2023-02-14 21:01:57,984 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-14 21:01:57,984 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-02-14 21:01:57,984 INFO [Listener at localhost.localdomain/46685] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-02-14 21:01:57,984 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-14 21:01:57,984 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-02-14 21:01:57,984 
INFO [Listener at localhost.localdomain/46685] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-02-14 21:01:57,986 INFO [Listener at localhost.localdomain/46685] ipc.NettyRpcServer(120): Bind to /136.243.104.168:33857 2023-02-14 21:01:57,987 INFO [Listener at localhost.localdomain/46685] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-02-14 21:01:57,988 DEBUG [Listener at localhost.localdomain/46685] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-02-14 21:01:57,988 INFO [Listener at localhost.localdomain/46685] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-14 21:01:57,990 INFO [Listener at localhost.localdomain/46685] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-14 21:01:57,991 INFO [Listener at localhost.localdomain/46685] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33857 connecting to ZooKeeper ensemble=127.0.0.1:53584 2023-02-14 21:01:58,002 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:338570x0, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-02-14 21:01:58,004 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ZKUtil(164): regionserver:338570x0, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-14 21:01:58,004 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): regionserver:33857-0x1016347bff40001 connected 2023-02-14 21:01:58,005 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ZKUtil(164): regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-14 21:01:58,005 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ZKUtil(164): regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-02-14 21:01:58,006 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33857 2023-02-14 21:01:58,008 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33857 2023-02-14 21:01:58,009 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33857 2023-02-14 21:01:58,012 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33857 2023-02-14 21:01:58,013 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33857 2023-02-14 21:01:58,022 INFO [Listener at localhost.localdomain/46685] client.ConnectionUtils(127): regionserver/jenkins-hbase12:0 server-side Connection retries=6 2023-02-14 21:01:58,022 INFO [Listener at localhost.localdomain/46685] 
ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-14 21:01:58,023 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-02-14 21:01:58,023 INFO [Listener at localhost.localdomain/46685] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-02-14 21:01:58,023 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-14 21:01:58,023 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-02-14 21:01:58,023 INFO [Listener at localhost.localdomain/46685] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-02-14 21:01:58,025 INFO [Listener at localhost.localdomain/46685] ipc.NettyRpcServer(120): Bind to /136.243.104.168:42465 2023-02-14 21:01:58,025 INFO [Listener at localhost.localdomain/46685] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-02-14 21:01:58,026 DEBUG [Listener at localhost.localdomain/46685] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-02-14 21:01:58,027 INFO [Listener at localhost.localdomain/46685] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-14 21:01:58,027 INFO [Listener at localhost.localdomain/46685] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-14 21:01:58,028 INFO [Listener at localhost.localdomain/46685] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42465 connecting to ZooKeeper ensemble=127.0.0.1:53584 2023-02-14 21:01:58,041 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:424650x0, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-02-14 21:01:58,043 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ZKUtil(164): regionserver:424650x0, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-14 21:01:58,043 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): regionserver:42465-0x1016347bff40002 connected 2023-02-14 21:01:58,044 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ZKUtil(164): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-14 21:01:58,045 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ZKUtil(164): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-02-14 21:01:58,047 DEBUG [Listener at 
localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42465 2023-02-14 21:01:58,048 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42465 2023-02-14 21:01:58,048 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42465 2023-02-14 21:01:58,048 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42465 2023-02-14 21:01:58,048 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42465 2023-02-14 21:01:58,057 INFO [Listener at localhost.localdomain/46685] client.ConnectionUtils(127): regionserver/jenkins-hbase12:0 server-side Connection retries=6 2023-02-14 21:01:58,057 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-14 21:01:58,058 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-02-14 21:01:58,058 INFO [Listener at localhost.localdomain/46685] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-02-14 21:01:58,058 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-14 21:01:58,058 INFO [Listener at localhost.localdomain/46685] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-02-14 21:01:58,058 INFO [Listener at localhost.localdomain/46685] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-02-14 21:01:58,059 INFO [Listener at localhost.localdomain/46685] ipc.NettyRpcServer(120): Bind to /136.243.104.168:46837 2023-02-14 21:01:58,059 INFO [Listener at localhost.localdomain/46685] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-02-14 21:01:58,060 DEBUG [Listener at localhost.localdomain/46685] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-02-14 21:01:58,061 INFO [Listener at localhost.localdomain/46685] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-14 21:01:58,062 INFO [Listener at localhost.localdomain/46685] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-14 21:01:58,063 INFO [Listener at localhost.localdomain/46685] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46837 connecting to ZooKeeper 
ensemble=127.0.0.1:53584 2023-02-14 21:01:58,076 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:468370x0, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-02-14 21:01:58,077 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): regionserver:46837-0x1016347bff40003 connected 2023-02-14 21:01:58,077 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ZKUtil(164): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-14 21:01:58,078 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ZKUtil(164): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-14 21:01:58,079 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ZKUtil(164): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-02-14 21:01:58,079 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46837 2023-02-14 21:01:58,079 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46837 2023-02-14 21:01:58,080 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46837 2023-02-14 21:01:58,080 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46837 2023-02-14 21:01:58,081 DEBUG [Listener at localhost.localdomain/46685] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46837 2023-02-14 21:01:58,085 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase12.apache.org,39877,1676408517795 2023-02-14 21:01:58,097 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-02-14 21:01:58,097 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase12.apache.org,39877,1676408517795 2023-02-14 21:01:58,107 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-02-14 21:01:58,107 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-02-14 21:01:58,107 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, 
state=SyncConnected, path=/hbase/master 2023-02-14 21:01:58,107 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-02-14 21:01:58,109 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:58,110 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-02-14 21:01:58,111 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase12.apache.org,39877,1676408517795 from backup master directory 2023-02-14 21:01:58,111 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-02-14 21:01:58,118 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase12.apache.org,39877,1676408517795 2023-02-14 21:01:58,118 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-02-14 21:01:58,118 WARN [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
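Editor's note: the ZKWatcher/ZKUtil entries above ("Set watcher on znode that does not yet exist", followed by NodeCreated events for /hbase/master) describe watches registered on znodes before they are created. The following is a minimal sketch of that pattern using the plain Apache ZooKeeper client rather than HBase's internal ZKWatcher; the class name is invented for illustration, and the ensemble address and session timeout are simply the values that appear in this log.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class MasterZNodeWatchSketch {
  public static void main(String[] args) throws Exception {
    Watcher watcher = (WatchedEvent event) -> {
      // Fires once for NodeCreated/NodeDeleted/etc.; a real watcher re-registers itself.
      System.out.println("event=" + event.getType() + " path=" + event.getPath());
    };
    ZooKeeper zk = new ZooKeeper("127.0.0.1:53584", 40000, watcher);
    // exists() with watch=true registers a watch even when the znode is absent,
    // which is what "Set watcher on znode that does not yet exist" refers to.
    if (zk.exists("/hbase/master", true) == null) {
      System.out.println("/hbase/master not created yet; watch registered");
    }
  }
}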
2023-02-14 21:01:58,118 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase12.apache.org,39877,1676408517795 2023-02-14 21:01:58,144 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] util.FSUtils(628): Created cluster ID file at hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/hbase.id with ID: ec6bf592-862b-4cc8-b258-0ab927cfe530 2023-02-14 21:01:58,161 INFO [master/jenkins-hbase12:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-14 21:01:58,170 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:58,186 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7fc748c4 to 127.0.0.1:53584 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-14 21:01:58,200 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@744d0057, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-14 21:01:58,200 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-02-14 21:01:58,201 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-02-14 21:01:58,201 INFO [master/jenkins-hbase12:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-14 21:01:58,204 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7689): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/MasterData/data/master/store-tmp 2023-02-14 21:01:58,218 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(865): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-14 21:01:58,218 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1603): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-02-14 21:01:58,218 INFO 
[master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1625): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-14 21:01:58,218 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1646): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-14 21:01:58,218 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1713): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-02-14 21:01:58,218 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1723): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-14 21:01:58,219 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1837): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-14 21:01:58,219 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1557): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-02-14 21:01:58,219 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/MasterData/WALs/jenkins-hbase12.apache.org,39877,1676408517795 2023-02-14 21:01:58,223 INFO [master/jenkins-hbase12:0:becomeActiveMaster] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C39877%2C1676408517795, suffix=, logDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/MasterData/WALs/jenkins-hbase12.apache.org,39877,1676408517795, archiveDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/MasterData/oldWALs, maxLogs=10 2023-02-14 21:01:58,238 DEBUG [RS-EventLoopGroup-10-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33583,DS-c33e2c50-51d2-4e38-a571-06a91f2eb02b,DISK] 2023-02-14 21:01:58,239 DEBUG [RS-EventLoopGroup-10-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39473,DS-ca50d022-0459-4bdc-a5ec-fde8fda75278,DISK] 2023-02-14 21:01:58,239 DEBUG [RS-EventLoopGroup-10-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42467,DS-29565ff7-2000-4cd4-a62b-e55888461b26,DISK] 2023-02-14 21:01:58,242 INFO [master/jenkins-hbase12:0:becomeActiveMaster] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/MasterData/WALs/jenkins-hbase12.apache.org,39877,1676408517795/jenkins-hbase12.apache.org%2C39877%2C1676408517795.1676408518223 2023-02-14 21:01:58,242 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39473,DS-ca50d022-0459-4bdc-a5ec-fde8fda75278,DISK], DatanodeInfoWithStorage[127.0.0.1:33583,DS-c33e2c50-51d2-4e38-a571-06a91f2eb02b,DISK], DatanodeInfoWithStorage[127.0.0.1:42467,DS-29565ff7-2000-4cd4-a62b-e55888461b26,DISK]] 2023-02-14 21:01:58,242 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7850): Opening region: {ENCODED => 
1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-02-14 21:01:58,242 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(865): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-14 21:01:58,242 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7890): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-02-14 21:01:58,242 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7893): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-02-14 21:01:58,245 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-02-14 21:01:58,247 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-02-14 21:01:58,247 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-02-14 21:01:58,248 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:58,249 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-02-14 21:01:58,251 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-02-14 21:01:58,255 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1054): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-02-14 21:01:58,258 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, 
maxSeqId=-1 2023-02-14 21:01:58,258 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1071): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=69930571, jitterRate=0.04204671084880829}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-02-14 21:01:58,259 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(964): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-02-14 21:01:58,260 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-02-14 21:01:58,262 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-02-14 21:01:58,262 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-02-14 21:01:58,262 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-02-14 21:01:58,263 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-02-14 21:01:58,263 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-02-14 21:01:58,263 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(95): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-02-14 21:01:58,264 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-02-14 21:01:58,265 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-02-14 21:01:58,273 INFO [master/jenkins-hbase12:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-02-14 21:01:58,273 INFO [master/jenkins-hbase12:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
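Editor's note: the StochasticLoadBalancer entry above reports maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800 and maxRunningTime=30000. A hedged sketch of how those logged values map onto configuration keys follows; the key names are the ones I believe branch-2.4's StochasticLoadBalancer reads, and the class name is made up for illustration, so treat both as assumptions rather than a definitive reference.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerConfigSketch {
  // Returns a Configuration carrying the same balancer settings the master logged.
  public static Configuration tuneBalancer() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1000000);      // maxSteps=1000000
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);    // stepsPerRegion=800
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30000); // maxRunningTime=30000
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false); // runMaxSteps=false
    return conf;
  }
}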
2023-02-14 21:01:58,274 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-02-14 21:01:58,274 INFO [master/jenkins-hbase12:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-02-14 21:01:58,274 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-02-14 21:01:58,286 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:58,287 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-02-14 21:01:58,288 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-02-14 21:01:58,289 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-02-14 21:01:58,297 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-14 21:01:58,297 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-14 21:01:58,297 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-14 21:01:58,297 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-14 21:01:58,297 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:58,298 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase12.apache.org,39877,1676408517795, sessionid=0x1016347bff40000, setting cluster-up flag (Was=false) 2023-02-14 21:01:58,318 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:58,350 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-02-14 21:01:58,354 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase12.apache.org,39877,1676408517795 2023-02-14 21:01:58,381 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:58,413 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-02-14 21:01:58,416 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase12.apache.org,39877,1676408517795 2023-02-14 21:01:58,417 WARN [master/jenkins-hbase12:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/.hbase-snapshot/.tmp 2023-02-14 21:01:58,422 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-02-14 21:01:58,422 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-14 21:01:58,422 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-14 21:01:58,423 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-14 21:01:58,423 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-14 21:01:58,423 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase12:0, corePoolSize=10, maxPoolSize=10 2023-02-14 21:01:58,423 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,423 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-14 21:01:58,423 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,425 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1676408548425 2023-02-14 21:01:58,426 INFO 
[master/jenkins-hbase12:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-02-14 21:01:58,426 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-02-14 21:01:58,426 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-02-14 21:01:58,426 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-02-14 21:01:58,427 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-02-14 21:01:58,427 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-02-14 21:01:58,427 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,428 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-02-14 21:01:58,428 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-02-14 21:01:58,429 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-02-14 21:01:58,429 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-02-14 21:01:58,429 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-02-14 21:01:58,431 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-02-14 21:01:58,431 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-02-14 21:01:58,432 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.large.0-1676408518431,5,FailOnTimeoutGroup] 2023-02-14 21:01:58,432 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.small.0-1676408518432,5,FailOnTimeoutGroup] 2023-02-14 21:01:58,432 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,432 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-02-14 21:01:58,432 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 
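Editor's note: the cleaner entries above ("Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled") show periodic chores being registered with the master's ChoreService. Below is a rough sketch, under the assumption that ChoreService/ScheduledChore behave as described, of scheduling such a chore; the chore body, names, and stopper are invented for illustration and only the scheduling pattern is the point.

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ChoreService service = new ChoreService("sketch");
    // Period is in milliseconds by default; 600000 ms matches the logged LogsCleaner period.
    ScheduledChore chore = new ScheduledChore("LogsCleanerSketch", stopper, 600000) {
      @Override protected void chore() {
        // A real cleaner chore deletes expired WALs/HFiles here.
        System.out.println("chore tick");
      }
    };
    service.scheduleChore(chore);
    // ... later, on shutdown: service.shutdown();
  }
}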
2023-02-14 21:01:58,432 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,434 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-02-14 21:01:58,451 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-02-14 21:01:58,452 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-02-14 21:01:58,452 INFO [PEWorker-1] regionserver.HRegion(7671): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574 2023-02-14 21:01:58,465 DEBUG [PEWorker-1] regionserver.HRegion(865): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-14 21:01:58,467 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column 
family info of region 1588230740 2023-02-14 21:01:58,469 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/info 2023-02-14 21:01:58,470 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-02-14 21:01:58,470 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:58,470 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-02-14 21:01:58,472 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/rep_barrier 2023-02-14 21:01:58,472 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-02-14 21:01:58,473 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:58,473 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-02-14 21:01:58,474 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/table 2023-02-14 21:01:58,475 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 
EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-02-14 21:01:58,475 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:58,476 DEBUG [PEWorker-1] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740 2023-02-14 21:01:58,477 DEBUG [PEWorker-1] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740 2023-02-14 21:01:58,479 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-02-14 21:01:58,481 DEBUG [PEWorker-1] regionserver.HRegion(1054): writing seq id for 1588230740 2023-02-14 21:01:58,483 INFO [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer(952): ClusterId : ec6bf592-862b-4cc8-b258-0ab927cfe530 2023-02-14 21:01:58,483 INFO [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(952): ClusterId : ec6bf592-862b-4cc8-b258-0ab927cfe530 2023-02-14 21:01:58,483 INFO [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer(952): ClusterId : ec6bf592-862b-4cc8-b258-0ab927cfe530 2023-02-14 21:01:58,485 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-02-14 21:01:58,485 DEBUG [RS:1;jenkins-hbase12:42465] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-02-14 21:01:58,484 DEBUG [RS:2;jenkins-hbase12:46837] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-02-14 21:01:58,485 DEBUG [RS:0;jenkins-hbase12:33857] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-02-14 21:01:58,486 INFO [PEWorker-1] regionserver.HRegion(1071): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=70732949, jitterRate=0.05400307476520538}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-02-14 21:01:58,486 DEBUG [PEWorker-1] regionserver.HRegion(964): Region open journal for 1588230740: 2023-02-14 21:01:58,486 DEBUG [PEWorker-1] regionserver.HRegion(1603): Closing 1588230740, disabling compactions & flushes 2023-02-14 21:01:58,487 INFO [PEWorker-1] regionserver.HRegion(1625): Closing region hbase:meta,,1.1588230740 2023-02-14 21:01:58,487 DEBUG [PEWorker-1] regionserver.HRegion(1646): Waiting without time limit for 
close lock on hbase:meta,,1.1588230740 2023-02-14 21:01:58,487 DEBUG [PEWorker-1] regionserver.HRegion(1713): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-02-14 21:01:58,487 DEBUG [PEWorker-1] regionserver.HRegion(1723): Updates disabled for region hbase:meta,,1.1588230740 2023-02-14 21:01:58,487 INFO [PEWorker-1] regionserver.HRegion(1837): Closed hbase:meta,,1.1588230740 2023-02-14 21:01:58,487 DEBUG [PEWorker-1] regionserver.HRegion(1557): Region close journal for 1588230740: 2023-02-14 21:01:58,488 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-02-14 21:01:58,488 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-02-14 21:01:58,489 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-02-14 21:01:58,490 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-02-14 21:01:58,492 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-02-14 21:01:58,508 DEBUG [RS:2;jenkins-hbase12:46837] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-02-14 21:01:58,508 DEBUG [RS:1;jenkins-hbase12:42465] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-02-14 21:01:58,508 DEBUG [RS:2;jenkins-hbase12:46837] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-02-14 21:01:58,508 DEBUG [RS:1;jenkins-hbase12:42465] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-02-14 21:01:58,508 DEBUG [RS:0;jenkins-hbase12:33857] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-02-14 21:01:58,508 DEBUG [RS:0;jenkins-hbase12:33857] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-02-14 21:01:58,530 DEBUG [RS:2;jenkins-hbase12:46837] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-02-14 21:01:58,532 DEBUG [RS:1;jenkins-hbase12:42465] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-02-14 21:01:58,532 DEBUG [RS:0;jenkins-hbase12:33857] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-02-14 21:01:58,535 DEBUG [RS:1;jenkins-hbase12:42465] zookeeper.ReadOnlyZKClient(139): Connect 0x4ff25158 to 127.0.0.1:53584 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-14 21:01:58,535 DEBUG [RS:0;jenkins-hbase12:33857] zookeeper.ReadOnlyZKClient(139): Connect 0x3725ec51 to 127.0.0.1:53584 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-14 21:01:58,535 DEBUG [RS:2;jenkins-hbase12:46837] zookeeper.ReadOnlyZKClient(139): Connect 0x656ca820 to 127.0.0.1:53584 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-02-14 21:01:58,551 DEBUG [RS:1;jenkins-hbase12:42465] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4135a6d6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-14 21:01:58,551 DEBUG [RS:0;jenkins-hbase12:33857] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@39e204c5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-14 21:01:58,551 DEBUG [RS:2;jenkins-hbase12:46837] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61533055, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-14 21:01:58,551 DEBUG [RS:1;jenkins-hbase12:42465] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@450a3a28, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-14 21:01:58,552 DEBUG [RS:0;jenkins-hbase12:33857] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78df0efe, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-14 21:01:58,552 DEBUG [RS:2;jenkins-hbase12:46837] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1512b7c7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-14 21:01:58,563 DEBUG [RS:1;jenkins-hbase12:42465] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase12:42465 2023-02-14 21:01:58,563 INFO [RS:1;jenkins-hbase12:42465] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-02-14 21:01:58,563 INFO [RS:1;jenkins-hbase12:42465] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-02-14 21:01:58,563 DEBUG [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1023): About to register with Master. 
2023-02-14 21:01:58,564 INFO [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(2810): reportForDuty to master=jenkins-hbase12.apache.org,39877,1676408517795 with isa=jenkins-hbase12.apache.org/136.243.104.168:42465, startcode=1676408518022 2023-02-14 21:01:58,564 DEBUG [RS:1;jenkins-hbase12:42465] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-02-14 21:01:58,567 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:34379, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-02-14 21:01:58,568 DEBUG [RS:0;jenkins-hbase12:33857] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase12:33857 2023-02-14 21:01:58,568 DEBUG [RS:2;jenkins-hbase12:46837] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase12:46837 2023-02-14 21:01:58,568 INFO [RS:0;jenkins-hbase12:33857] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-02-14 21:01:58,568 INFO [RS:0;jenkins-hbase12:33857] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-02-14 21:01:58,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39877] master.ServerManager(394): Registering regionserver=jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:01:58,568 DEBUG [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer(1023): About to register with Master. 2023-02-14 21:01:58,568 INFO [RS:2;jenkins-hbase12:46837] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-02-14 21:01:58,569 INFO [RS:2;jenkins-hbase12:46837] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-02-14 21:01:58,569 DEBUG [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer(1023): About to register with Master. 
2023-02-14 21:01:58,569 DEBUG [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1596): Config from master: hbase.rootdir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574 2023-02-14 21:01:58,569 DEBUG [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1596): Config from master: fs.defaultFS=hdfs://localhost.localdomain:35245 2023-02-14 21:01:58,569 DEBUG [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1596): Config from master: hbase.master.info.port=-1 2023-02-14 21:01:58,569 INFO [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer(2810): reportForDuty to master=jenkins-hbase12.apache.org,39877,1676408517795 with isa=jenkins-hbase12.apache.org/136.243.104.168:46837, startcode=1676408518057 2023-02-14 21:01:58,569 INFO [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer(2810): reportForDuty to master=jenkins-hbase12.apache.org,39877,1676408517795 with isa=jenkins-hbase12.apache.org/136.243.104.168:33857, startcode=1676408517983 2023-02-14 21:01:58,569 DEBUG [RS:0;jenkins-hbase12:33857] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-02-14 21:01:58,569 DEBUG [RS:2;jenkins-hbase12:46837] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-02-14 21:01:58,572 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:49735, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-02-14 21:01:58,572 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:57465, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-02-14 21:01:58,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39877] master.ServerManager(394): Registering regionserver=jenkins-hbase12.apache.org,46837,1676408518057 2023-02-14 21:01:58,573 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39877] master.ServerManager(394): Registering regionserver=jenkins-hbase12.apache.org,33857,1676408517983 2023-02-14 21:01:58,574 DEBUG [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer(1596): Config from master: hbase.rootdir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574 2023-02-14 21:01:58,574 DEBUG [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer(1596): Config from master: hbase.rootdir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574 2023-02-14 21:01:58,574 DEBUG [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer(1596): Config from master: fs.defaultFS=hdfs://localhost.localdomain:35245 2023-02-14 21:01:58,574 DEBUG [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer(1596): Config from master: fs.defaultFS=hdfs://localhost.localdomain:35245 2023-02-14 21:01:58,574 DEBUG [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer(1596): Config from master: hbase.master.info.port=-1 2023-02-14 21:01:58,574 DEBUG [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer(1596): Config from master: hbase.master.info.port=-1 2023-02-14 21:01:58,581 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:01:58,624 DEBUG [RS:1;jenkins-hbase12:42465] 
zookeeper.ZKUtil(162): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:01:58,624 WARN [RS:1;jenkins-hbase12:42465] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-02-14 21:01:58,624 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase12.apache.org,46837,1676408518057] 2023-02-14 21:01:58,625 DEBUG [RS:0;jenkins-hbase12:33857] zookeeper.ZKUtil(162): regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,33857,1676408517983 2023-02-14 21:01:58,624 INFO [RS:1;jenkins-hbase12:42465] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-14 21:01:58,625 WARN [RS:0;jenkins-hbase12:33857] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-02-14 21:01:58,625 DEBUG [RS:2;jenkins-hbase12:46837] zookeeper.ZKUtil(162): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,46837,1676408518057 2023-02-14 21:01:58,625 DEBUG [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1947): logDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/WALs/jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:01:58,625 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase12.apache.org,33857,1676408517983] 2023-02-14 21:01:58,625 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase12.apache.org,42465,1676408518022] 2023-02-14 21:01:58,625 WARN [RS:2;jenkins-hbase12:46837] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-02-14 21:01:58,625 INFO [RS:0;jenkins-hbase12:33857] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-14 21:01:58,625 INFO [RS:2;jenkins-hbase12:46837] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-14 21:01:58,625 DEBUG [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer(1947): logDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/WALs/jenkins-hbase12.apache.org,33857,1676408517983 2023-02-14 21:01:58,625 DEBUG [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer(1947): logDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/WALs/jenkins-hbase12.apache.org,46837,1676408518057 2023-02-14 21:01:58,635 DEBUG [RS:1;jenkins-hbase12:42465] zookeeper.ZKUtil(162): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,46837,1676408518057 2023-02-14 21:01:58,635 DEBUG [RS:0;jenkins-hbase12:33857] zookeeper.ZKUtil(162): regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,46837,1676408518057 2023-02-14 21:01:58,635 DEBUG [RS:2;jenkins-hbase12:46837] zookeeper.ZKUtil(162): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,46837,1676408518057 2023-02-14 21:01:58,635 DEBUG [RS:1;jenkins-hbase12:42465] zookeeper.ZKUtil(162): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,33857,1676408517983 2023-02-14 21:01:58,636 DEBUG [RS:0;jenkins-hbase12:33857] zookeeper.ZKUtil(162): regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,33857,1676408517983 2023-02-14 21:01:58,636 DEBUG [RS:2;jenkins-hbase12:46837] zookeeper.ZKUtil(162): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,33857,1676408517983 2023-02-14 21:01:58,636 DEBUG [RS:1;jenkins-hbase12:42465] zookeeper.ZKUtil(162): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:01:58,636 DEBUG [RS:0;jenkins-hbase12:33857] zookeeper.ZKUtil(162): regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:01:58,636 DEBUG [RS:2;jenkins-hbase12:46837] zookeeper.ZKUtil(162): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:01:58,637 DEBUG [RS:0;jenkins-hbase12:33857] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-02-14 21:01:58,637 DEBUG [RS:1;jenkins-hbase12:42465] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-02-14 21:01:58,637 INFO [RS:0;jenkins-hbase12:33857] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-02-14 21:01:58,637 INFO [RS:1;jenkins-hbase12:42465] 
regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-02-14 21:01:58,637 DEBUG [RS:2;jenkins-hbase12:46837] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-02-14 21:01:58,638 INFO [RS:2;jenkins-hbase12:46837] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-02-14 21:01:58,638 INFO [RS:0;jenkins-hbase12:33857] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-02-14 21:01:58,640 INFO [RS:0;jenkins-hbase12:33857] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-02-14 21:01:58,640 INFO [RS:0;jenkins-hbase12:33857] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,641 INFO [RS:1;jenkins-hbase12:42465] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-02-14 21:01:58,641 INFO [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer$CompactionChecker(1838): CompactionChecker runs every PT1S 2023-02-14 21:01:58,645 DEBUG [jenkins-hbase12:39877] assignment.AssignmentManager(2178): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-02-14 21:01:58,645 DEBUG [jenkins-hbase12:39877] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase12.apache.org=0} racks are {/default-rack=0} 2023-02-14 21:01:58,645 INFO [RS:1;jenkins-hbase12:42465] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-02-14 21:01:58,646 INFO [RS:1;jenkins-hbase12:42465] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,646 INFO [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer$CompactionChecker(1838): CompactionChecker runs every PT1S 2023-02-14 21:01:58,646 INFO [RS:2;jenkins-hbase12:46837] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-02-14 21:01:58,649 INFO [RS:0;jenkins-hbase12:33857] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,649 INFO [RS:2;jenkins-hbase12:46837] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-02-14 21:01:58,649 DEBUG [RS:0;jenkins-hbase12:33857] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,649 INFO [RS:2;jenkins-hbase12:46837] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-02-14 21:01:58,650 DEBUG [RS:0;jenkins-hbase12:33857] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,650 DEBUG [RS:0;jenkins-hbase12:33857] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,650 DEBUG [RS:0;jenkins-hbase12:33857] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,650 DEBUG [jenkins-hbase12:39877] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-02-14 21:01:58,650 DEBUG [jenkins-hbase12:39877] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-02-14 21:01:58,650 DEBUG [RS:0;jenkins-hbase12:33857] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,650 DEBUG [jenkins-hbase12:39877] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-02-14 21:01:58,650 DEBUG [RS:0;jenkins-hbase12:33857] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-14 21:01:58,650 DEBUG [jenkins-hbase12:39877] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-02-14 21:01:58,650 DEBUG [RS:0;jenkins-hbase12:33857] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,650 DEBUG [RS:0;jenkins-hbase12:33857] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,650 DEBUG [RS:0;jenkins-hbase12:33857] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,651 DEBUG [RS:0;jenkins-hbase12:33857] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,653 INFO [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer$CompactionChecker(1838): CompactionChecker runs every PT1S 2023-02-14 21:01:58,657 INFO [RS:0;jenkins-hbase12:33857] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,653 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase12.apache.org,42465,1676408518022, state=OPENING 2023-02-14 21:01:58,653 INFO [RS:1;jenkins-hbase12:42465] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,657 INFO [RS:0;jenkins-hbase12:33857] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,658 INFO [RS:0;jenkins-hbase12:33857] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-02-14 21:01:58,658 DEBUG [RS:1;jenkins-hbase12:42465] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,658 DEBUG [RS:1;jenkins-hbase12:42465] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,658 DEBUG [RS:1;jenkins-hbase12:42465] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,658 DEBUG [RS:1;jenkins-hbase12:42465] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,658 DEBUG [RS:1;jenkins-hbase12:42465] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,658 DEBUG [RS:1;jenkins-hbase12:42465] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-14 21:01:58,658 DEBUG [RS:1;jenkins-hbase12:42465] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,658 DEBUG [RS:1;jenkins-hbase12:42465] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,658 DEBUG [RS:1;jenkins-hbase12:42465] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,658 DEBUG [RS:1;jenkins-hbase12:42465] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,665 INFO [RS:2;jenkins-hbase12:46837] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,665 INFO [RS:1;jenkins-hbase12:42465] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,666 DEBUG [RS:2;jenkins-hbase12:46837] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,666 INFO [RS:1;jenkins-hbase12:42465] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,666 DEBUG [RS:2;jenkins-hbase12:46837] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,666 INFO [RS:1;jenkins-hbase12:42465] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-02-14 21:01:58,666 DEBUG [RS:2;jenkins-hbase12:46837] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,666 DEBUG [RS:2;jenkins-hbase12:46837] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,666 DEBUG [RS:2;jenkins-hbase12:46837] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,666 DEBUG [RS:2;jenkins-hbase12:46837] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-14 21:01:58,666 DEBUG [RS:2;jenkins-hbase12:46837] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,666 DEBUG [RS:2;jenkins-hbase12:46837] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,666 DEBUG [RS:2;jenkins-hbase12:46837] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,666 DEBUG [RS:2;jenkins-hbase12:46837] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-14 21:01:58,672 INFO [RS:0;jenkins-hbase12:33857] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-02-14 21:01:58,672 INFO [RS:0;jenkins-hbase12:33857] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,33857,1676408517983-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,673 INFO [RS:2;jenkins-hbase12:46837] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,673 INFO [RS:2;jenkins-hbase12:46837] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,673 INFO [RS:2;jenkins-hbase12:46837] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,676 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-02-14 21:01:58,682 INFO [RS:1;jenkins-hbase12:42465] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-02-14 21:01:58,682 INFO [RS:1;jenkins-hbase12:42465] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,42465,1676408518022-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:58,682 INFO [RS:2;jenkins-hbase12:46837] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-02-14 21:01:58,683 INFO [RS:2;jenkins-hbase12:46837] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,46837,1676408518057-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-02-14 21:01:58,686 INFO [RS:0;jenkins-hbase12:33857] regionserver.Replication(203): jenkins-hbase12.apache.org,33857,1676408517983 started 2023-02-14 21:01:58,686 INFO [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer(1638): Serving as jenkins-hbase12.apache.org,33857,1676408517983, RpcServer on jenkins-hbase12.apache.org/136.243.104.168:33857, sessionid=0x1016347bff40001 2023-02-14 21:01:58,686 DEBUG [RS:0;jenkins-hbase12:33857] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-02-14 21:01:58,686 DEBUG [RS:0;jenkins-hbase12:33857] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase12.apache.org,33857,1676408517983 2023-02-14 21:01:58,686 DEBUG [RS:0;jenkins-hbase12:33857] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,33857,1676408517983' 2023-02-14 21:01:58,686 DEBUG [RS:0;jenkins-hbase12:33857] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-02-14 21:01:58,686 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:58,687 DEBUG [RS:0;jenkins-hbase12:33857] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-02-14 21:01:58,687 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-02-14 21:01:58,687 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase12.apache.org,42465,1676408518022}] 2023-02-14 21:01:58,687 DEBUG [RS:0;jenkins-hbase12:33857] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-02-14 21:01:58,687 DEBUG [RS:0;jenkins-hbase12:33857] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-02-14 21:01:58,687 DEBUG [RS:0;jenkins-hbase12:33857] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase12.apache.org,33857,1676408517983 2023-02-14 21:01:58,688 DEBUG [RS:0;jenkins-hbase12:33857] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,33857,1676408517983' 2023-02-14 21:01:58,688 DEBUG [RS:0;jenkins-hbase12:33857] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-02-14 21:01:58,689 DEBUG [RS:0;jenkins-hbase12:33857] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-02-14 21:01:58,690 DEBUG [RS:0;jenkins-hbase12:33857] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-02-14 21:01:58,690 INFO [RS:0;jenkins-hbase12:33857] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-02-14 21:01:58,690 INFO [RS:0;jenkins-hbase12:33857] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-02-14 21:01:58,700 INFO [RS:2;jenkins-hbase12:46837] regionserver.Replication(203): jenkins-hbase12.apache.org,46837,1676408518057 started 2023-02-14 21:01:58,700 INFO [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer(1638): Serving as jenkins-hbase12.apache.org,46837,1676408518057, RpcServer on jenkins-hbase12.apache.org/136.243.104.168:46837, sessionid=0x1016347bff40003 2023-02-14 21:01:58,700 DEBUG [RS:2;jenkins-hbase12:46837] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-02-14 21:01:58,700 DEBUG [RS:2;jenkins-hbase12:46837] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase12.apache.org,46837,1676408518057 2023-02-14 21:01:58,700 DEBUG [RS:2;jenkins-hbase12:46837] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,46837,1676408518057' 2023-02-14 21:01:58,700 DEBUG [RS:2;jenkins-hbase12:46837] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-02-14 21:01:58,701 DEBUG [RS:2;jenkins-hbase12:46837] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-02-14 21:01:58,701 DEBUG [RS:2;jenkins-hbase12:46837] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-02-14 21:01:58,701 DEBUG [RS:2;jenkins-hbase12:46837] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-02-14 21:01:58,701 DEBUG [RS:2;jenkins-hbase12:46837] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase12.apache.org,46837,1676408518057 2023-02-14 21:01:58,701 DEBUG [RS:2;jenkins-hbase12:46837] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,46837,1676408518057' 2023-02-14 21:01:58,701 DEBUG [RS:2;jenkins-hbase12:46837] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-02-14 21:01:58,702 DEBUG [RS:2;jenkins-hbase12:46837] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-02-14 21:01:58,702 DEBUG [RS:2;jenkins-hbase12:46837] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-02-14 21:01:58,702 INFO [RS:2;jenkins-hbase12:46837] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-02-14 21:01:58,702 INFO [RS:2;jenkins-hbase12:46837] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-02-14 21:01:58,705 INFO [RS:1;jenkins-hbase12:42465] regionserver.Replication(203): jenkins-hbase12.apache.org,42465,1676408518022 started 2023-02-14 21:01:58,705 INFO [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1638): Serving as jenkins-hbase12.apache.org,42465,1676408518022, RpcServer on jenkins-hbase12.apache.org/136.243.104.168:42465, sessionid=0x1016347bff40002 2023-02-14 21:01:58,705 DEBUG [RS:1;jenkins-hbase12:42465] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-02-14 21:01:58,705 DEBUG [RS:1;jenkins-hbase12:42465] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:01:58,705 DEBUG [RS:1;jenkins-hbase12:42465] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,42465,1676408518022' 2023-02-14 21:01:58,705 DEBUG [RS:1;jenkins-hbase12:42465] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-02-14 21:01:58,705 DEBUG [RS:1;jenkins-hbase12:42465] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-02-14 21:01:58,706 DEBUG [RS:1;jenkins-hbase12:42465] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-02-14 21:01:58,706 DEBUG [RS:1;jenkins-hbase12:42465] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-02-14 21:01:58,706 DEBUG [RS:1;jenkins-hbase12:42465] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:01:58,706 DEBUG [RS:1;jenkins-hbase12:42465] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,42465,1676408518022' 2023-02-14 21:01:58,706 DEBUG [RS:1;jenkins-hbase12:42465] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-02-14 21:01:58,706 DEBUG [RS:1;jenkins-hbase12:42465] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-02-14 21:01:58,706 DEBUG [RS:1;jenkins-hbase12:42465] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-02-14 21:01:58,706 INFO [RS:1;jenkins-hbase12:42465] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-02-14 21:01:58,706 INFO [RS:1;jenkins-hbase12:42465] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-02-14 21:01:58,793 INFO [RS:0;jenkins-hbase12:33857] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C33857%2C1676408517983, suffix=, logDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/WALs/jenkins-hbase12.apache.org,33857,1676408517983, archiveDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/oldWALs, maxLogs=32 2023-02-14 21:01:58,806 INFO [RS:2;jenkins-hbase12:46837] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C46837%2C1676408518057, suffix=, logDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/WALs/jenkins-hbase12.apache.org,46837,1676408518057, archiveDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/oldWALs, maxLogs=32 2023-02-14 21:01:58,812 INFO [RS:1;jenkins-hbase12:42465] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C42465%2C1676408518022, suffix=, logDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/WALs/jenkins-hbase12.apache.org,42465,1676408518022, archiveDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/oldWALs, maxLogs=32 2023-02-14 21:01:58,817 DEBUG [RS-EventLoopGroup-10-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42467,DS-29565ff7-2000-4cd4-a62b-e55888461b26,DISK] 2023-02-14 21:01:58,817 DEBUG [RS-EventLoopGroup-10-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33583,DS-c33e2c50-51d2-4e38-a571-06a91f2eb02b,DISK] 2023-02-14 21:01:58,821 DEBUG [RS-EventLoopGroup-10-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39473,DS-ca50d022-0459-4bdc-a5ec-fde8fda75278,DISK] 2023-02-14 21:01:58,825 INFO [RS:0;jenkins-hbase12:33857] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/WALs/jenkins-hbase12.apache.org,33857,1676408517983/jenkins-hbase12.apache.org%2C33857%2C1676408517983.1676408518794 2023-02-14 21:01:58,827 DEBUG [RS:0;jenkins-hbase12:33857] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42467,DS-29565ff7-2000-4cd4-a62b-e55888461b26,DISK], DatanodeInfoWithStorage[127.0.0.1:33583,DS-c33e2c50-51d2-4e38-a571-06a91f2eb02b,DISK], DatanodeInfoWithStorage[127.0.0.1:39473,DS-ca50d022-0459-4bdc-a5ec-fde8fda75278,DISK]] 2023-02-14 21:01:58,839 DEBUG [RS-EventLoopGroup-10-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42467,DS-29565ff7-2000-4cd4-a62b-e55888461b26,DISK] 2023-02-14 21:01:58,839 DEBUG [RS-EventLoopGroup-10-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:33583,DS-c33e2c50-51d2-4e38-a571-06a91f2eb02b,DISK] 2023-02-14 21:01:58,839 DEBUG [RS-EventLoopGroup-10-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39473,DS-ca50d022-0459-4bdc-a5ec-fde8fda75278,DISK] 2023-02-14 21:01:58,845 DEBUG [RS-EventLoopGroup-10-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33583,DS-c33e2c50-51d2-4e38-a571-06a91f2eb02b,DISK] 2023-02-14 21:01:58,845 DEBUG [RS-EventLoopGroup-10-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39473,DS-ca50d022-0459-4bdc-a5ec-fde8fda75278,DISK] 2023-02-14 21:01:58,845 DEBUG [RS-EventLoopGroup-10-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42467,DS-29565ff7-2000-4cd4-a62b-e55888461b26,DISK] 2023-02-14 21:01:58,846 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:01:58,847 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-02-14 21:01:58,847 INFO [RS:1;jenkins-hbase12:42465] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/WALs/jenkins-hbase12.apache.org,42465,1676408518022/jenkins-hbase12.apache.org%2C42465%2C1676408518022.1676408518817 2023-02-14 21:01:58,848 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:59284, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-02-14 21:01:58,849 DEBUG [RS:1;jenkins-hbase12:42465] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42467,DS-29565ff7-2000-4cd4-a62b-e55888461b26,DISK], DatanodeInfoWithStorage[127.0.0.1:33583,DS-c33e2c50-51d2-4e38-a571-06a91f2eb02b,DISK], DatanodeInfoWithStorage[127.0.0.1:39473,DS-ca50d022-0459-4bdc-a5ec-fde8fda75278,DISK]] 2023-02-14 21:01:58,849 INFO [RS:2;jenkins-hbase12:46837] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/WALs/jenkins-hbase12.apache.org,46837,1676408518057/jenkins-hbase12.apache.org%2C46837%2C1676408518057.1676408518810 2023-02-14 21:01:58,849 DEBUG [RS:2;jenkins-hbase12:46837] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39473,DS-ca50d022-0459-4bdc-a5ec-fde8fda75278,DISK], DatanodeInfoWithStorage[127.0.0.1:33583,DS-c33e2c50-51d2-4e38-a571-06a91f2eb02b,DISK], DatanodeInfoWithStorage[127.0.0.1:42467,DS-29565ff7-2000-4cd4-a62b-e55888461b26,DISK]] 2023-02-14 21:01:58,855 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(128): Open hbase:meta,,1.1588230740 2023-02-14 21:01:58,855 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-14 21:01:58,857 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.AbstractFSWAL(464): WAL configuration: 
blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C42465%2C1676408518022.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/WALs/jenkins-hbase12.apache.org,42465,1676408518022, archiveDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/oldWALs, maxLogs=32 2023-02-14 21:01:58,872 DEBUG [RS-EventLoopGroup-10-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39473,DS-ca50d022-0459-4bdc-a5ec-fde8fda75278,DISK] 2023-02-14 21:01:58,873 DEBUG [RS-EventLoopGroup-10-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33583,DS-c33e2c50-51d2-4e38-a571-06a91f2eb02b,DISK] 2023-02-14 21:01:58,873 DEBUG [RS-EventLoopGroup-10-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42467,DS-29565ff7-2000-4cd4-a62b-e55888461b26,DISK] 2023-02-14 21:01:58,875 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/WALs/jenkins-hbase12.apache.org,42465,1676408518022/jenkins-hbase12.apache.org%2C42465%2C1676408518022.meta.1676408518859.meta 2023-02-14 21:01:58,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39473,DS-ca50d022-0459-4bdc-a5ec-fde8fda75278,DISK], DatanodeInfoWithStorage[127.0.0.1:33583,DS-c33e2c50-51d2-4e38-a571-06a91f2eb02b,DISK], DatanodeInfoWithStorage[127.0.0.1:42467,DS-29565ff7-2000-4cd4-a62b-e55888461b26,DISK]] 2023-02-14 21:01:58,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7850): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-02-14 21:01:58,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-02-14 21:01:58,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(8546): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-02-14 21:01:58,877 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-02-14 21:01:58,877 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-02-14 21:01:58,877 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(865): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-14 21:01:58,877 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7890): checking encryption for 1588230740 2023-02-14 21:01:58,877 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7893): checking classloading for 1588230740 2023-02-14 21:01:58,879 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-02-14 21:01:58,881 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/info 2023-02-14 21:01:58,881 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/info 2023-02-14 21:01:58,881 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-02-14 21:01:58,882 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:58,882 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-02-14 21:01:58,883 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/rep_barrier 2023-02-14 21:01:58,883 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/rep_barrier 2023-02-14 21:01:58,884 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-02-14 21:01:58,884 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:58,885 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-02-14 21:01:58,886 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/table 2023-02-14 21:01:58,886 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/table 2023-02-14 21:01:58,886 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-02-14 21:01:58,887 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:58,888 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740 2023-02-14 21:01:58,890 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740 2023-02-14 21:01:58,894 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-02-14 21:01:58,895 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1054): writing seq id for 1588230740 2023-02-14 21:01:58,896 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1071): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=60658882, jitterRate=-0.0961122214794159}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-02-14 21:01:58,897 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(964): Region open journal for 1588230740: 2023-02-14 21:01:58,898 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2335): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1676408518846 2023-02-14 21:01:58,904 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2362): Finished post open deploy task for hbase:meta,,1.1588230740 2023-02-14 21:01:58,904 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(156): Opened hbase:meta,,1.1588230740 2023-02-14 21:01:58,905 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase12.apache.org,42465,1676408518022, state=OPEN 2023-02-14 21:01:59,072 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-02-14 21:01:59,073 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-02-14 21:01:59,094 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-02-14 21:01:59,095 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-02-14 21:01:59,102 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-02-14 21:01:59,102 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase12.apache.org,42465,1676408518022 in 408 msec 2023-02-14 21:01:59,106 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-02-14 21:01:59,107 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 613 msec 2023-02-14 21:01:59,109 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 688 msec 2023-02-14 21:01:59,109 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1676408519109, completionTime=-1 2023-02-14 21:01:59,109 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-02-14 
21:01:59,110 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(1519): Joining cluster... 2023-02-14 21:01:59,112 DEBUG [hconnection-0x18e58072-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-02-14 21:01:59,114 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:59298, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-02-14 21:01:59,116 INFO [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(1531): Number of RegionServers=3 2023-02-14 21:01:59,116 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1676408579116 2023-02-14 21:01:59,116 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1676408639116 2023-02-14 21:01:59,116 INFO [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(1538): Joined the cluster in 6 msec 2023-02-14 21:01:59,147 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,39877,1676408517795-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:59,148 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,39877,1676408517795-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:59,148 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,39877,1676408517795-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:59,148 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase12:39877, period=300000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:59,148 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-02-14 21:01:59,148 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-02-14 21:01:59,148 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-02-14 21:01:59,151 DEBUG [master/jenkins-hbase12:0.Chore.1] janitor.CatalogJanitor(175): 2023-02-14 21:01:59,151 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-02-14 21:01:59,155 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-02-14 21:01:59,157 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-02-14 21:01:59,160 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/.tmp/data/hbase/namespace/43d9d1f973716daa8d97f0a866cbf2da 2023-02-14 21:01:59,161 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/.tmp/data/hbase/namespace/43d9d1f973716daa8d97f0a866cbf2da empty. 2023-02-14 21:01:59,162 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/.tmp/data/hbase/namespace/43d9d1f973716daa8d97f0a866cbf2da 2023-02-14 21:01:59,162 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-02-14 21:01:59,179 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-02-14 21:01:59,180 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7671): creating {ENCODED => 43d9d1f973716daa8d97f0a866cbf2da, NAME => 'hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/.tmp 2023-02-14 21:01:59,195 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(865): Instantiated hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-14 21:01:59,196 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1603): Closing 43d9d1f973716daa8d97f0a866cbf2da, disabling compactions & flushes 2023-02-14 21:01:59,196 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1625): Closing region 
hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. 2023-02-14 21:01:59,196 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. 2023-02-14 21:01:59,196 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1713): Acquired close lock on hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. after waiting 0 ms 2023-02-14 21:01:59,196 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1723): Updates disabled for region hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. 2023-02-14 21:01:59,196 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1837): Closed hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. 2023-02-14 21:01:59,196 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1557): Region close journal for 43d9d1f973716daa8d97f0a866cbf2da: 2023-02-14 21:01:59,200 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-02-14 21:01:59,201 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1676408519201"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1676408519201"}]},"ts":"1676408519201"} 2023-02-14 21:01:59,205 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-02-14 21:01:59,206 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-02-14 21:01:59,206 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1676408519206"}]},"ts":"1676408519206"} 2023-02-14 21:01:59,208 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-02-14 21:01:59,232 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase12.apache.org=0} racks are {/default-rack=0} 2023-02-14 21:01:59,233 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-02-14 21:01:59,233 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-02-14 21:01:59,233 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-02-14 21:01:59,233 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-02-14 21:01:59,234 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=43d9d1f973716daa8d97f0a866cbf2da, ASSIGN}] 2023-02-14 21:01:59,238 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=43d9d1f973716daa8d97f0a866cbf2da, ASSIGN 2023-02-14 21:01:59,240 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=43d9d1f973716daa8d97f0a866cbf2da, ASSIGN; state=OFFLINE, location=jenkins-hbase12.apache.org,42465,1676408518022; forceNewPlan=false, retain=false 2023-02-14 21:01:59,391 INFO [jenkins-hbase12:39877] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-02-14 21:01:59,393 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=43d9d1f973716daa8d97f0a866cbf2da, regionState=OPENING, regionLocation=jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:01:59,394 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1676408519393"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1676408519393"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1676408519393"}]},"ts":"1676408519393"} 2023-02-14 21:01:59,400 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 43d9d1f973716daa8d97f0a866cbf2da, server=jenkins-hbase12.apache.org,42465,1676408518022}] 2023-02-14 21:01:59,564 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(128): Open hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. 2023-02-14 21:01:59,565 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7850): Opening region: {ENCODED => 43d9d1f973716daa8d97f0a866cbf2da, NAME => 'hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da.', STARTKEY => '', ENDKEY => ''} 2023-02-14 21:01:59,565 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 43d9d1f973716daa8d97f0a866cbf2da 2023-02-14 21:01:59,565 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(865): Instantiated hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-14 21:01:59,566 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7890): checking encryption for 43d9d1f973716daa8d97f0a866cbf2da 2023-02-14 21:01:59,566 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7893): checking classloading for 43d9d1f973716daa8d97f0a866cbf2da 2023-02-14 21:01:59,568 INFO [StoreOpener-43d9d1f973716daa8d97f0a866cbf2da-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 43d9d1f973716daa8d97f0a866cbf2da 2023-02-14 21:01:59,570 DEBUG [StoreOpener-43d9d1f973716daa8d97f0a866cbf2da-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/namespace/43d9d1f973716daa8d97f0a866cbf2da/info 2023-02-14 21:01:59,570 DEBUG [StoreOpener-43d9d1f973716daa8d97f0a866cbf2da-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/namespace/43d9d1f973716daa8d97f0a866cbf2da/info 2023-02-14 21:01:59,572 INFO [StoreOpener-43d9d1f973716daa8d97f0a866cbf2da-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 43d9d1f973716daa8d97f0a866cbf2da columnFamilyName info 2023-02-14 21:01:59,573 INFO [StoreOpener-43d9d1f973716daa8d97f0a866cbf2da-1] regionserver.HStore(310): Store=43d9d1f973716daa8d97f0a866cbf2da/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-14 21:01:59,574 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/namespace/43d9d1f973716daa8d97f0a866cbf2da 2023-02-14 21:01:59,575 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/namespace/43d9d1f973716daa8d97f0a866cbf2da 2023-02-14 21:01:59,579 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1054): writing seq id for 43d9d1f973716daa8d97f0a866cbf2da 2023-02-14 21:01:59,582 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/namespace/43d9d1f973716daa8d97f0a866cbf2da/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-02-14 21:01:59,583 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1071): Opened 43d9d1f973716daa8d97f0a866cbf2da; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=68738312, jitterRate=0.024280667304992676}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-02-14 21:01:59,583 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(964): Region open journal for 43d9d1f973716daa8d97f0a866cbf2da: 2023-02-14 21:01:59,584 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2335): Post open deploy tasks for hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da., pid=6, masterSystemTime=1676408519554 2023-02-14 21:01:59,588 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2362): Finished post open deploy task for hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. 
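The CreateTableProcedure entries above log the full 'hbase:namespace' descriptor: a single 'info' family with BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', BLOCKSIZE => '8192'. The namespace table is created internally by the master, but an equivalent descriptor can be expressed with the public HBase 2.x builder API. A minimal sketch under that assumption; the table name "demo_table" is a hypothetical placeholder, not something taken from this log.

// Sketch of a descriptor equivalent to the one logged for 'hbase:namespace'
// (single 'info' family), built with the public HBase 2.x client API.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorSketch {
    public static TableDescriptor build() {
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
            .setInMemory(true)                 // IN_MEMORY => 'true'
            .setMaxVersions(10)                // VERSIONS => '10'
            .setBlocksize(8192)                // BLOCKSIZE => '8192'
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_table")) // hypothetical table name
            .setColumnFamily(info)
            .build();
    }
}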
2023-02-14 21:01:59,588 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(156): Opened hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. 2023-02-14 21:01:59,592 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=43d9d1f973716daa8d97f0a866cbf2da, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:01:59,592 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1676408519592"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1676408519592"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1676408519592"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1676408519592"}]},"ts":"1676408519592"} 2023-02-14 21:01:59,599 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-02-14 21:01:59,599 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 43d9d1f973716daa8d97f0a866cbf2da, server=jenkins-hbase12.apache.org,42465,1676408518022 in 196 msec 2023-02-14 21:01:59,603 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-02-14 21:01:59,604 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=43d9d1f973716daa8d97f0a866cbf2da, ASSIGN in 365 msec 2023-02-14 21:01:59,605 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-02-14 21:01:59,605 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1676408519605"}]},"ts":"1676408519605"} 2023-02-14 21:01:59,607 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-02-14 21:01:59,621 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-02-14 21:01:59,623 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 472 msec 2023-02-14 21:01:59,653 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-02-14 21:01:59,665 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-02-14 21:01:59,665 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:01:59,670 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): 
Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-02-14 21:01:59,694 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-02-14 21:01:59,711 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 38 msec 2023-02-14 21:01:59,725 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-02-14 21:01:59,781 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-02-14 21:01:59,894 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 168 msec 2023-02-14 21:01:59,957 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-02-14 21:01:59,978 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-02-14 21:01:59,978 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.860sec 2023-02-14 21:01:59,978 INFO [master/jenkins-hbase12:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-02-14 21:01:59,978 INFO [master/jenkins-hbase12:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-02-14 21:01:59,978 INFO [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-02-14 21:01:59,979 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,39877,1676408517795-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-02-14 21:01:59,979 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,39877,1676408517795-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-02-14 21:01:59,982 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-02-14 21:01:59,985 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ReadOnlyZKClient(139): Connect 0x5f1b95d9 to 127.0.0.1:53584 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-14 21:02:00,003 DEBUG [Listener at localhost.localdomain/46685] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5cc5fab3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-14 21:02:00,005 DEBUG [hconnection-0x70993a04-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-02-14 21:02:00,007 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:59302, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-02-14 21:02:00,011 INFO [Listener at localhost.localdomain/46685] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase12.apache.org,39877,1676408517795 2023-02-14 21:02:00,011 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ReadOnlyZKClient(139): Connect 0x717c2a4e to 127.0.0.1:53584 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-14 21:02:00,024 DEBUG [ReadOnlyZKClient-127.0.0.1:53584@0x717c2a4e] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@59809701, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-14 21:02:00,025 DEBUG [Listener at localhost.localdomain/46685] client.ConnectionUtils(586): Start fetching master stub from registry 2023-02-14 21:02:00,027 DEBUG [ReadOnlyZKClient-127.0.0.1:53584@0x717c2a4e] client.AsyncConnectionImpl(289): The fetched master address is jenkins-hbase12.apache.org,39877,1676408517795 2023-02-14 21:02:00,027 DEBUG [ReadOnlyZKClient-127.0.0.1:53584@0x717c2a4e] client.ConnectionUtils(594): The fetched master stub is org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$Stub@5dbf18e9 2023-02-14 21:02:00,031 DEBUG [ReadOnlyZKClient-127.0.0.1:53584@0x717c2a4e] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-02-14 21:02:00,033 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:58558, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-02-14 21:02:00,033 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39877] master.MasterRpcServices(1560): Client=jenkins//136.243.104.168 shutdown 2023-02-14 21:02:00,034 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39877] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase12.apache.org,39877,1676408517795 2023-02-14 21:02:00,044 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-02-14 21:02:00,044 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): 
regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-02-14 21:02:00,044 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39877] procedure2.ProcedureExecutor(629): Stopping 2023-02-14 21:02:00,044 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-02-14 21:02:00,044 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-02-14 21:02:00,046 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:02:00,047 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-14 21:02:00,046 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-14 21:02:00,046 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39877] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7fc748c4 to 127.0.0.1:53584 2023-02-14 21:02:00,047 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-14 21:02:00,047 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-14 21:02:00,047 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39877] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:02:00,050 INFO [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,33857,1676408517983' ***** 2023-02-14 21:02:00,050 INFO [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer(2310): STOPPED: Exiting; cluster shutdown set and not carrying any regions 2023-02-14 21:02:00,052 INFO [RS:0;jenkins-hbase12:33857] regionserver.HeapMemoryManager(220): Stopping 2023-02-14 21:02:00,052 INFO [RS:0;jenkins-hbase12:33857] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-02-14 21:02:00,052 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-02-14 21:02:00,052 INFO [RS:0;jenkins-hbase12:33857] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
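The sequence above shows the client connecting to the master, the "Client=jenkins//136.243.104.168 shutdown" RPC, the master deleting /hbase/running, and every region server then moving to STOPPING. A minimal sketch of the kind of call that produces that shutdown entry through the async client API; this is an illustration under assumptions, not the actual TestAsyncClusterAdminApi2#testStop source.

// Sketch: connect via the async client, obtain an AsyncAdmin, request cluster shutdown.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.AsyncAdmin;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ShutdownSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (AsyncConnection conn = ConnectionFactory.createAsyncConnection(conf).get()) {
            AsyncAdmin admin = conn.getAdmin();
            // Asks the active master to shut the whole cluster down; per the log,
            // the master then removes /hbase/running, which the region servers watch.
            admin.shutdown().get();
        }
    }
}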
2023-02-14 21:02:00,053 INFO [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,33857,1676408517983 2023-02-14 21:02:00,053 DEBUG [RS:0;jenkins-hbase12:33857] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3725ec51 to 127.0.0.1:53584 2023-02-14 21:02:00,053 DEBUG [RS:0;jenkins-hbase12:33857] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:02:00,054 INFO [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,33857,1676408517983; all regions closed. 2023-02-14 21:02:00,063 INFO [regionserver/jenkins-hbase12:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-02-14 21:02:00,066 DEBUG [RS:0;jenkins-hbase12:33857] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/oldWALs 2023-02-14 21:02:00,066 INFO [RS:0;jenkins-hbase12:33857] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C33857%2C1676408517983:(num 1676408518794) 2023-02-14 21:02:00,066 DEBUG [RS:0;jenkins-hbase12:33857] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:02:00,066 INFO [RS:0;jenkins-hbase12:33857] regionserver.LeaseManager(133): Closed leases 2023-02-14 21:02:00,067 INFO [RS:0;jenkins-hbase12:33857] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase12:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-02-14 21:02:00,067 INFO [RS:0;jenkins-hbase12:33857] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-02-14 21:02:00,067 INFO [regionserver/jenkins-hbase12:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-02-14 21:02:00,067 INFO [RS:0;jenkins-hbase12:33857] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-02-14 21:02:00,067 INFO [RS:0;jenkins-hbase12:33857] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-02-14 21:02:00,068 INFO [RS:0;jenkins-hbase12:33857] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:33857 2023-02-14 21:02:00,070 INFO [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,46837,1676408518057' ***** 2023-02-14 21:02:00,070 INFO [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer(2310): STOPPED: Exiting; cluster shutdown set and not carrying any regions 2023-02-14 21:02:00,071 INFO [RS:2;jenkins-hbase12:46837] regionserver.HeapMemoryManager(220): Stopping 2023-02-14 21:02:00,072 INFO [RS:2;jenkins-hbase12:46837] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-02-14 21:02:00,072 INFO [RS:2;jenkins-hbase12:46837] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-02-14 21:02:00,072 INFO [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,46837,1676408518057 2023-02-14 21:02:00,072 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-02-14 21:02:00,072 DEBUG [RS:2;jenkins-hbase12:46837] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x656ca820 to 127.0.0.1:53584 2023-02-14 21:02:00,072 DEBUG [RS:2;jenkins-hbase12:46837] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:02:00,072 INFO [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,46837,1676408518057; all regions closed. 2023-02-14 21:02:00,075 INFO [regionserver/jenkins-hbase12:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-02-14 21:02:00,076 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,33857,1676408517983 2023-02-14 21:02:00,076 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,33857,1676408517983 2023-02-14 21:02:00,076 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:02:00,077 ERROR [Listener at localhost.localdomain/46685-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@6e75d1 rejected from java.util.concurrent.ThreadPoolExecutor@451f104[Shutting down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:02:00,076 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:02:00,077 INFO [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1065): Closing user regions 2023-02-14 21:02:00,077 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,33857,1676408517983 2023-02-14 21:02:00,078 INFO 
[RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(3304): Received CLOSE for 43d9d1f973716daa8d97f0a866cbf2da 2023-02-14 21:02:00,077 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:02:00,078 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:02:00,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1603): Closing 43d9d1f973716daa8d97f0a866cbf2da, disabling compactions & flushes 2023-02-14 21:02:00,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1625): Closing region hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. 2023-02-14 21:02:00,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. 2023-02-14 21:02:00,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1713): Acquired close lock on hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. after waiting 0 ms 2023-02-14 21:02:00,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1723): Updates disabled for region hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. 2023-02-14 21:02:00,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2744): Flushing 43d9d1f973716daa8d97f0a866cbf2da 1/1 column families, dataSize=78 B heapSize=488 B 2023-02-14 21:02:00,082 DEBUG [RS:2;jenkins-hbase12:46837] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/oldWALs 2023-02-14 21:02:00,082 INFO [RS:2;jenkins-hbase12:46837] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C46837%2C1676408518057:(num 1676408518810) 2023-02-14 21:02:00,082 DEBUG [RS:2;jenkins-hbase12:46837] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:02:00,082 INFO [RS:2;jenkins-hbase12:46837] regionserver.LeaseManager(133): Closed leases 2023-02-14 21:02:00,083 INFO [RS:2;jenkins-hbase12:46837] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase12:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-02-14 21:02:00,083 INFO [RS:2;jenkins-hbase12:46837] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-02-14 21:02:00,083 INFO [regionserver/jenkins-hbase12:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-02-14 21:02:00,083 INFO [RS:2;jenkins-hbase12:46837] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-02-14 21:02:00,083 INFO [RS:2;jenkins-hbase12:46837] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-02-14 21:02:00,084 INFO [RS:2;jenkins-hbase12:46837] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:46837 2023-02-14 21:02:00,097 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase12.apache.org,33857,1676408517983] 2023-02-14 21:02:00,097 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase12.apache.org,33857,1676408517983; numProcessing=1 2023-02-14 21:02:00,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/namespace/43d9d1f973716daa8d97f0a866cbf2da/.tmp/info/04d39faf6bf549398c07e02b7ea82dc9 2023-02-14 21:02:00,107 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:02:00,107 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,46837,1676408518057 2023-02-14 21:02:00,107 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:02:00,107 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,46837,1676408518057 2023-02-14 21:02:00,107 ERROR [Listener at localhost.localdomain/46685-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@cea57b2 rejected from java.util.concurrent.ThreadPoolExecutor@8d5c2f7[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 6] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:02:00,108 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/rs/jenkins-hbase12.apache.org,46837,1676408518057 2023-02-14 21:02:00,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/namespace/43d9d1f973716daa8d97f0a866cbf2da/.tmp/info/04d39faf6bf549398c07e02b7ea82dc9 as hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/namespace/43d9d1f973716daa8d97f0a866cbf2da/info/04d39faf6bf549398c07e02b7ea82dc9 2023-02-14 21:02:00,117 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/namespace/43d9d1f973716daa8d97f0a866cbf2da/info/04d39faf6bf549398c07e02b7ea82dc9, entries=2, sequenceid=6, filesize=4.8 K 2023-02-14 21:02:00,118 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase12.apache.org,33857,1676408517983 already deleted, retry=false 2023-02-14 21:02:00,118 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase12.apache.org,33857,1676408517983 expired; onlineServers=2 2023-02-14 21:02:00,119 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2947): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 43d9d1f973716daa8d97f0a866cbf2da in 39ms, sequenceid=6, compaction requested=false 2023-02-14 21:02:00,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/namespace/43d9d1f973716daa8d97f0a866cbf2da/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-02-14 21:02:00,128 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:02:00,128 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase12.apache.org,46837,1676408518057] 2023-02-14 21:02:00,128 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase12.apache.org,46837,1676408518057; numProcessing=2 2023-02-14 21:02:00,129 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase12.apache.org,33857,1676408517983 znode expired, triggering replicatorRemoved event 2023-02-14 21:02:00,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1837): Closed hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. 2023-02-14 21:02:00,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1557): Region close journal for 43d9d1f973716daa8d97f0a866cbf2da: 2023-02-14 21:02:00,130 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39877] assignment.AssignmentManager(1094): RegionServer CLOSED 43d9d1f973716daa8d97f0a866cbf2da 2023-02-14 21:02:00,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1676408519148.43d9d1f973716daa8d97f0a866cbf2da. 
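The flush above writes the memstore to a file under the region's .tmp directory and then "commits" it by renaming it into the column-family directory, at which point readers can see it. A hedged sketch of that write-then-rename pattern using the plain Hadoop FileSystem API; the paths and helper are illustrative placeholders, not HBase's HRegionFileSystem implementation.

// Sketch of the commit step visible in the "Committing ... as ..." entry above:
// a completed HFile under .tmp is made visible with a single rename.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CommitSketch {
    public static void commit(Configuration conf, Path regionDir, String family, String hfile)
            throws IOException {
        FileSystem fs = FileSystem.get(conf);
        Path tmp = new Path(new Path(regionDir, ".tmp"), new Path(family, hfile));
        Path dst = new Path(new Path(regionDir, family), hfile);
        // Write happens entirely under .tmp; the rename is the atomic publish step.
        if (!fs.rename(tmp, dst)) {
            throw new IOException("Failed to commit " + tmp + " as " + dst);
        }
    }
}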
2023-02-14 21:02:00,139 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase12.apache.org,46837,1676408518057 already deleted, retry=false 2023-02-14 21:02:00,139 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase12.apache.org,46837,1676408518057 expired; onlineServers=1 2023-02-14 21:02:00,139 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:02:00,141 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:02:00,141 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase12.apache.org,46837,1676408518057 znode expired, triggering replicatorRemoved event 2023-02-14 21:02:00,142 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:02:00,181 DEBUG [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1084): Waiting on 1588230740 2023-02-14 21:02:00,257 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:02:00,258 INFO [RS:2;jenkins-hbase12:46837] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,46837,1676408518057; zookeeper connection closed. 
2023-02-14 21:02:00,258 ERROR [Listener at localhost.localdomain/46685-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@22d9b425 rejected from java.util.concurrent.ThreadPoolExecutor@8d5c2f7[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 6] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:02:00,258 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3985578d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3985578d 2023-02-14 21:02:00,259 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:46837-0x1016347bff40003, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:02:00,260 ERROR [Listener at localhost.localdomain/46685-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@5ab4870c rejected from java.util.concurrent.ThreadPoolExecutor@8d5c2f7[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 6] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:02:00,283 INFO [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,42465,1676408518022' ***** 2023-02-14 21:02:00,284 INFO [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(2310): STOPPED: Stopped; only catalog regions remaining online 2023-02-14 21:02:00,284 INFO [RS:1;jenkins-hbase12:42465] regionserver.HeapMemoryManager(220): Stopping 2023-02-14 21:02:00,284 INFO [RS:1;jenkins-hbase12:42465] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-02-14 21:02:00,284 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-02-14 21:02:00,284 INFO [RS:1;jenkins-hbase12:42465] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
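The repeated "Error while calling watcher ... RejectedExecutionException" entries come from the ZooKeeper event thread delivering late events to a ZKWatcher whose single-thread executor has already been shut down during region-server stop. A pure-JDK sketch reproducing the same failure mode under that assumption.

// Sketch: submitting to an executor after shutdown() triggers the default
// AbortPolicy and throws RejectedExecutionException, as in the stack traces above.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class RejectedSubmitSketch {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(() -> System.out.println("event delivered"));
        pool.shutdown(); // the watcher's executor stops as the server shuts down
        try {
            // A ZooKeeper event arriving after shutdown is rejected.
            pool.submit(() -> System.out.println("late event"));
        } catch (RejectedExecutionException e) {
            System.out.println("late event rejected: " + e);
        }
    }
}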
2023-02-14 21:02:00,285 INFO [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:02:00,286 DEBUG [RS:1;jenkins-hbase12:42465] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4ff25158 to 127.0.0.1:53584 2023-02-14 21:02:00,286 DEBUG [RS:1;jenkins-hbase12:42465] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:02:00,286 INFO [RS:1;jenkins-hbase12:42465] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-02-14 21:02:00,286 INFO [RS:1;jenkins-hbase12:42465] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-02-14 21:02:00,286 INFO [RS:1;jenkins-hbase12:42465] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-02-14 21:02:00,286 INFO [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(3304): Received CLOSE for 1588230740 2023-02-14 21:02:00,287 INFO [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1475): Waiting on 1 regions to close 2023-02-14 21:02:00,287 DEBUG [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1479): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-02-14 21:02:00,288 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1603): Closing 1588230740, disabling compactions & flushes 2023-02-14 21:02:00,288 DEBUG [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1505): Waiting on 1588230740 2023-02-14 21:02:00,288 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1625): Closing region hbase:meta,,1.1588230740 2023-02-14 21:02:00,289 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-02-14 21:02:00,289 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1713): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-02-14 21:02:00,289 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1723): Updates disabled for region hbase:meta,,1.1588230740 2023-02-14 21:02:00,289 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2744): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-02-14 21:02:00,309 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/.tmp/info/93309d9392d04fef813acb285c78a828 2023-02-14 21:02:00,329 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/.tmp/table/1c504c8c5ac04185863eeed8c655d5a3 2023-02-14 21:02:00,337 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/.tmp/info/93309d9392d04fef813acb285c78a828 as hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/info/93309d9392d04fef813acb285c78a828 2023-02-14 21:02:00,345 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/info/93309d9392d04fef813acb285c78a828, entries=10, sequenceid=9, filesize=5.9 K 2023-02-14 21:02:00,347 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/.tmp/table/1c504c8c5ac04185863eeed8c655d5a3 as hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/table/1c504c8c5ac04185863eeed8c655d5a3 2023-02-14 21:02:00,356 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/table/1c504c8c5ac04185863eeed8c655d5a3, entries=2, sequenceid=9, filesize=4.7 K 2023-02-14 21:02:00,358 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:02:00,358 INFO [RS:0;jenkins-hbase12:33857] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,33857,1676408517983; zookeeper connection closed. 2023-02-14 21:02:00,358 ERROR [Listener at localhost.localdomain/46685-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@3eb378e rejected from java.util.concurrent.ThreadPoolExecutor@451f104[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:02:00,358 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5678d816] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5678d816 2023-02-14 21:02:00,358 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2947): Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 69ms, sequenceid=9, compaction requested=false 2023-02-14 21:02:00,358 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:33857-0x1016347bff40001, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:02:00,359 ERROR [Listener at localhost.localdomain/46685-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@7b0a0c01 rejected from 
java.util.concurrent.ThreadPoolExecutor@451f104[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:02:00,367 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-02-14 21:02:00,367 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-02-14 21:02:00,368 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1837): Closed hbase:meta,,1.1588230740 2023-02-14 21:02:00,368 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1557): Region close journal for 1588230740: 2023-02-14 21:02:00,368 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-02-14 21:02:00,371 INFO [regionserver/jenkins-hbase12:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-02-14 21:02:00,489 INFO [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,42465,1676408518022; all regions closed. 2023-02-14 21:02:00,501 DEBUG [RS:1;jenkins-hbase12:42465] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/oldWALs 2023-02-14 21:02:00,501 INFO [RS:1;jenkins-hbase12:42465] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C42465%2C1676408518022.meta:.meta(num 1676408518859) 2023-02-14 21:02:00,506 DEBUG [RS:1;jenkins-hbase12:42465] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/oldWALs 2023-02-14 21:02:00,506 INFO [RS:1;jenkins-hbase12:42465] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C42465%2C1676408518022:(num 1676408518817) 2023-02-14 21:02:00,506 DEBUG [RS:1;jenkins-hbase12:42465] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:02:00,507 INFO [RS:1;jenkins-hbase12:42465] regionserver.LeaseManager(133): Closed leases 2023-02-14 21:02:00,507 INFO [RS:1;jenkins-hbase12:42465] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase12:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-02-14 21:02:00,507 INFO [regionserver/jenkins-hbase12:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-02-14 21:02:00,508 INFO [RS:1;jenkins-hbase12:42465] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:42465 2023-02-14 21:02:00,518 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,42465,1676408518022 2023-02-14 21:02:00,518 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:02:00,518 ERROR [Listener at localhost.localdomain/46685-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@37e00fd6 rejected from java.util.concurrent.ThreadPoolExecutor@33df9df2[Shutting down, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:02:00,518 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-14 21:02:00,519 ERROR [Listener at localhost.localdomain/46685-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@70486e52 rejected from java.util.concurrent.ThreadPoolExecutor@33df9df2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:02:00,528 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase12.apache.org,42465,1676408518022] 2023-02-14 21:02:00,529 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase12.apache.org,42465,1676408518022; numProcessing=3 2023-02-14 21:02:00,539 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): 
Node /hbase/draining/jenkins-hbase12.apache.org,42465,1676408518022 already deleted, retry=false 2023-02-14 21:02:00,539 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase12.apache.org,42465,1676408518022 expired; onlineServers=0 2023-02-14 21:02:00,539 INFO [RegionServerTracker-0] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,39877,1676408517795' ***** 2023-02-14 21:02:00,539 INFO [RegionServerTracker-0] regionserver.HRegionServer(2310): STOPPED: Cluster shutdown set; onlineServer=0 2023-02-14 21:02:00,540 DEBUG [M:0;jenkins-hbase12:39877] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@328441a6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-14 21:02:00,540 INFO [M:0;jenkins-hbase12:39877] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,39877,1676408517795 2023-02-14 21:02:00,540 INFO [M:0;jenkins-hbase12:39877] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,39877,1676408517795; all regions closed. 2023-02-14 21:02:00,540 DEBUG [M:0;jenkins-hbase12:39877] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:02:00,541 DEBUG [M:0;jenkins-hbase12:39877] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-02-14 21:02:00,541 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-02-14 21:02:00,542 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.large.0-1676408518431] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.large.0-1676408518431,5,FailOnTimeoutGroup] 2023-02-14 21:02:00,541 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.small.0-1676408518432] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.small.0-1676408518432,5,FailOnTimeoutGroup] 2023-02-14 21:02:00,541 DEBUG [M:0;jenkins-hbase12:39877] cleaner.HFileCleaner(317): Stopping file delete threads 2023-02-14 21:02:00,544 INFO [M:0;jenkins-hbase12:39877] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-02-14 21:02:00,544 INFO [M:0;jenkins-hbase12:39877] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-02-14 21:02:00,544 INFO [M:0;jenkins-hbase12:39877] hbase.ChoreService(369): Chore service for: master/jenkins-hbase12:0 had [] on shutdown 2023-02-14 21:02:00,544 DEBUG [M:0;jenkins-hbase12:39877] master.HMaster(1512): Stopping service threads 2023-02-14 21:02:00,544 INFO [M:0;jenkins-hbase12:39877] procedure2.RemoteProcedureDispatcher(118): Stopping procedure remote dispatcher 2023-02-14 21:02:00,544 ERROR [M:0;jenkins-hbase12:39877] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-2,5,PEWorkerGroup] 2023-02-14 21:02:00,546 INFO [M:0;jenkins-hbase12:39877] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-02-14 21:02:00,546 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-02-14 21:02:00,549 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-02-14 21:02:00,550 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-14 21:02:00,550 DEBUG [M:0;jenkins-hbase12:39877] zookeeper.ZKUtil(398): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-02-14 21:02:00,550 WARN [M:0;jenkins-hbase12:39877] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-02-14 21:02:00,550 INFO [M:0;jenkins-hbase12:39877] assignment.AssignmentManager(315): Stopping assignment manager 2023-02-14 21:02:00,551 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-14 21:02:00,551 INFO [M:0;jenkins-hbase12:39877] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-02-14 21:02:00,552 DEBUG [M:0;jenkins-hbase12:39877] regionserver.HRegion(1603): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-02-14 21:02:00,552 INFO [M:0;jenkins-hbase12:39877] regionserver.HRegion(1625): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-14 21:02:00,552 DEBUG [M:0;jenkins-hbase12:39877] regionserver.HRegion(1646): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-14 21:02:00,552 DEBUG [M:0;jenkins-hbase12:39877] regionserver.HRegion(1713): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-02-14 21:02:00,552 DEBUG [M:0;jenkins-hbase12:39877] regionserver.HRegion(1723): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-14 21:02:00,552 INFO [M:0;jenkins-hbase12:39877] regionserver.HRegion(2744): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.09 KB heapSize=29.59 KB 2023-02-14 21:02:00,553 INFO [Listener at localhost.localdomain/46685] client.AsyncConnectionImpl(207): Connection has been closed by Listener at localhost.localdomain/46685. 
2023-02-14 21:02:00,554 DEBUG [Listener at localhost.localdomain/46685] client.AsyncConnectionImpl(232): Call stack: at java.lang.Thread.getStackTrace(Thread.java:1564) at org.apache.hadoop.hbase.client.AsyncConnectionImpl.close(AsyncConnectionImpl.java:209) at org.apache.hbase.thirdparty.com.google.common.io.Closeables.close(Closeables.java:79) at org.apache.hadoop.hbase.client.TestAsyncClusterAdminApi2.tearDown(TestAsyncClusterAdminApi2.java:75) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:39) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) 2023-02-14 21:02:00,554 DEBUG [Listener at localhost.localdomain/46685] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:02:00,555 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x717c2a4e to 127.0.0.1:53584 2023-02-14 21:02:00,555 INFO [Listener at localhost.localdomain/46685] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-02-14 21:02:00,556 DEBUG [Listener at localhost.localdomain/46685] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5f1b95d9 to 127.0.0.1:53584 2023-02-14 21:02:00,556 DEBUG [Listener at localhost.localdomain/46685] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-14 21:02:00,557 DEBUG [Listener at localhost.localdomain/46685] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-02-14 21:02:00,576 INFO [M:0;jenkins-hbase12:39877] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.09 KB at sequenceid=66 (bloomFilter=true), 
to=hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/55cc074cf9a54a37af778e15e3ace486 2023-02-14 21:02:00,585 DEBUG [M:0;jenkins-hbase12:39877] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/55cc074cf9a54a37af778e15e3ace486 as hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/55cc074cf9a54a37af778e15e3ace486 2023-02-14 21:02:00,591 INFO [M:0;jenkins-hbase12:39877] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35245/user/jenkins/test-data/f30dd718-a24a-4032-cf08-d8c96e978574/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/55cc074cf9a54a37af778e15e3ace486, entries=8, sequenceid=66, filesize=6.3 K 2023-02-14 21:02:00,593 INFO [M:0;jenkins-hbase12:39877] regionserver.HRegion(2947): Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 40ms, sequenceid=66, compaction requested=false 2023-02-14 21:02:00,594 INFO [M:0;jenkins-hbase12:39877] regionserver.HRegion(1837): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-14 21:02:00,594 DEBUG [M:0;jenkins-hbase12:39877] regionserver.HRegion(1557): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-02-14 21:02:00,598 INFO [M:0;jenkins-hbase12:39877] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-02-14 21:02:00,598 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-02-14 21:02:00,598 INFO [M:0;jenkins-hbase12:39877] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:39877 2023-02-14 21:02:00,609 DEBUG [M:0;jenkins-hbase12:39877] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase12.apache.org,39877,1676408517795 already deleted, retry=false 2023-02-14 21:02:00,659 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:02:00,659 INFO [RS:1;jenkins-hbase12:42465] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,42465,1676408518022; zookeeper connection closed. 
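The tearDown call stack logged above shows TestAsyncClusterAdminApi2 closing its shared AsyncConnection (via Closeables.close) while HBaseTestingUtility tears the minicluster down. A rough sketch of that close-then-shutdown pattern, assuming the branch-2.4 async client API (the class name and bare HBaseConfiguration setup are assumptions, not the test's actual fixture):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class AsyncShutdownSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // open the async connection that the admin calls run over
        try (AsyncConnection conn = ConnectionFactory.createAsyncConnection(conf).get()) {
            // ask the active master to shut the cluster down, which drives the
            // "STOPPING region server" / "Cluster shutdown set" sequence in the log
            conn.getAdmin().shutdown().get();
        } // close() stops the rpc client and the ZooKeeper connection, as in tearDown above
    }
}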
2023-02-14 21:02:00,659 ERROR [Listener at localhost.localdomain/46685-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@81e6421 rejected from java.util.concurrent.ThreadPoolExecutor@33df9df2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:02:00,660 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@14c05d2e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@14c05d2e 2023-02-14 21:02:00,660 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): regionserver:42465-0x1016347bff40002, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:02:00,660 ERROR [Listener at localhost.localdomain/46685-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@257b3cd0 rejected from java.util.concurrent.ThreadPoolExecutor@33df9df2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:02:00,660 INFO [Listener at localhost.localdomain/46685] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-02-14 21:02:00,860 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:02:00,860 INFO [M:0;jenkins-hbase12:39877] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,39877,1676408517795; zookeeper connection closed. 
2023-02-14 21:02:00,861 ERROR [Listener at localhost.localdomain/46685-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@eec83de rejected from java.util.concurrent.ThreadPoolExecutor@ad316db[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 28] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:02:00,862 WARN [Listener at localhost.localdomain/46685] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-02-14 21:02:00,863 DEBUG [Listener at localhost.localdomain/46685-EventThread] zookeeper.ZKWatcher(600): master:39877-0x1016347bff40000, quorum=127.0.0.1:53584, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-14 21:02:00,864 ERROR [Listener at localhost.localdomain/46685-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@144c7391 rejected from java.util.concurrent.ThreadPoolExecutor@ad316db[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 28] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-14 21:02:00,920 INFO [Listener at localhost.localdomain/46685] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-02-14 21:02:01,030 WARN [BP-930859010-136.243.104.168-1676408515040 heartbeating to localhost.localdomain/127.0.0.1:35245] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-02-14 21:02:01,030 WARN [BP-930859010-136.243.104.168-1676408515040 heartbeating to localhost.localdomain/127.0.0.1:35245] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-930859010-136.243.104.168-1676408515040 (Datanode Uuid 232cf55c-0418-4fb9-8ec0-fbd6e0686cdf) service to localhost.localdomain/127.0.0.1:35245 2023-02-14 21:02:01,032 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/cluster_5ca65044-4646-225b-3c6e-01c7d1dead07/dfs/data/data5/current/BP-930859010-136.243.104.168-1676408515040] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-14 21:02:01,033 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/cluster_5ca65044-4646-225b-3c6e-01c7d1dead07/dfs/data/data6/current/BP-930859010-136.243.104.168-1676408515040] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-14 21:02:01,039 WARN [Listener at localhost.localdomain/46685] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-02-14 21:02:01,045 INFO [Listener at localhost.localdomain/46685] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-02-14 21:02:01,153 WARN [BP-930859010-136.243.104.168-1676408515040 heartbeating to localhost.localdomain/127.0.0.1:35245] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-02-14 21:02:01,154 WARN [BP-930859010-136.243.104.168-1676408515040 heartbeating to localhost.localdomain/127.0.0.1:35245] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-930859010-136.243.104.168-1676408515040 (Datanode Uuid 97f2e3f1-205b-42b3-8064-dffb9b3c2d6d) service to localhost.localdomain/127.0.0.1:35245 2023-02-14 21:02:01,155 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/cluster_5ca65044-4646-225b-3c6e-01c7d1dead07/dfs/data/data3/current/BP-930859010-136.243.104.168-1676408515040] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-14 21:02:01,156 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/cluster_5ca65044-4646-225b-3c6e-01c7d1dead07/dfs/data/data4/current/BP-930859010-136.243.104.168-1676408515040] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-14 21:02:01,160 WARN [Listener at localhost.localdomain/46685] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-02-14 21:02:01,164 INFO [Listener at localhost.localdomain/46685] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-02-14 21:02:01,272 WARN [BP-930859010-136.243.104.168-1676408515040 heartbeating to localhost.localdomain/127.0.0.1:35245] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-02-14 21:02:01,272 WARN [BP-930859010-136.243.104.168-1676408515040 heartbeating to localhost.localdomain/127.0.0.1:35245] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-930859010-136.243.104.168-1676408515040 (Datanode Uuid 113f4b12-2e6b-4fb5-b3fa-903d698b0296) service to localhost.localdomain/127.0.0.1:35245 2023-02-14 21:02:01,274 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/cluster_5ca65044-4646-225b-3c6e-01c7d1dead07/dfs/data/data1/current/BP-930859010-136.243.104.168-1676408515040] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-14 21:02:01,274 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/46f6222e-855f-b9b5-cfd7-078f29854fef/cluster_5ca65044-4646-225b-3c6e-01c7d1dead07/dfs/data/data2/current/BP-930859010-136.243.104.168-1676408515040] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-14 21:02:01,294 INFO [Listener at localhost.localdomain/46685] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-02-14 21:02:01,415 INFO [Listener at localhost.localdomain/46685] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-02-14 21:02:01,438 INFO [Listener at localhost.localdomain/46685] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-02-14 21:02:01,448 INFO [Listener at localhost.localdomain/46685] hbase.ResourceChecker(175): after: client.TestAsyncClusterAdminApi2#testShutdown Thread=111 (was 82) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-9-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-11-2 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:35245 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-10-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-11-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-8-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-8-2 java.lang.Thread.sleep(Native 
Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:35245 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-10-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/46685 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) 
org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:39) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-9-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-11-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) 
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-10-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-7-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:35245 from jenkins.hfs.5
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: RS-EventLoopGroup-7-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-10-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-13-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:35245
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:35245 from jenkins.hfs.3
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost.localdomain:35245
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:35245 from jenkins
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: nioEventLoopGroup-13-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost.localdomain:35245
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-10-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-7-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-8-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-9-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-9-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:35245
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
 - Thread LEAK? -, OpenFileDescriptor=537 (was 497) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=284 (was 291), ProcessCount=170 (was 170), AvailableMemoryMB=5023 (was 5082)
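Note on reading the summary line above: the values are a before/after comparison recorded around the test (Thread, OpenFileDescriptor, MaxFileDescriptor, SystemLoadAverage, ProcessCount, AvailableMemoryMB), and a "- <resource> LEAK? -" marker appears to be appended when the "after" count has grown beyond the "before" count. The snippet below is only a minimal, hypothetical Java sketch of that before/after accounting pattern; it is not the hbase.ResourceChecker implementation, and the class/method names (ResourceSnapshotSketch, reportAfter) and the Linux-only /proc/self/fd probe are illustrative assumptions.

// Minimal sketch of a before/after resource check (assumption: not HBase's ResourceChecker).
import java.io.File;

public class ResourceSnapshotSketch {
    private final int threadsBefore;
    private final int fdsBefore;

    public ResourceSnapshotSketch() {
        // Record baseline counts when the snapshot is created (i.e. before the test body runs).
        this.threadsBefore = liveThreadCount();
        this.fdsBefore = openFdCount();
    }

    /** Call after the test; prints a summary in the spirit of the log line above. */
    public void reportAfter(String testName) {
        int threadsAfter = liveThreadCount();
        int fdsAfter = openFdCount();
        StringBuilder sb = new StringBuilder("after: ").append(testName)
            .append(" Thread=").append(threadsAfter)
            .append(" (was ").append(threadsBefore).append(')');
        if (threadsAfter > threadsBefore) {
            sb.append(" - Thread LEAK? -");
        }
        sb.append(", OpenFileDescriptor=").append(fdsAfter)
            .append(" (was ").append(fdsBefore).append(')');
        if (fdsAfter > fdsBefore) {
            sb.append(" - OpenFileDescriptor LEAK? -");
        }
        System.out.println(sb);
    }

    private static int liveThreadCount() {
        // Thread.activeCount() is only an estimate, which is enough for a sketch.
        return Thread.activeCount();
    }

    private static int openFdCount() {
        // Assumption: on Linux, each entry under /proc/self/fd is one open descriptor.
        String[] fds = new File("/proc/self/fd").list();
        return fds == null ? -1 : fds.length;
    }
}

Under those assumptions, a test harness would construct the snapshot before starting the mini-cluster test and call reportAfter("client.TestAsyncClusterAdminApi2#testStop") once the cluster has shut down; the real ResourceChecker output additionally dumps the stacks of threads still alive after the test, which is what the "Potentially hanging thread" blocks above show.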